mirror of
https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-21 15:12:02 +00:00
Merge branch 'master' into evillique-nlp
This commit is contained in:
commit 916594fe23

12	.github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -2,28 +2,26 @@ I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla
 
 Changelog category (leave one):
 - New Feature
-- Bug Fix
 - Improvement
+- Bug Fix
 - Performance Improvement
 - Backward Incompatible Change
 - Build/Testing/Packaging Improvement
 - Documentation (changelog entry is not required)
-- Other
 - Not for changelog (changelog entry is not required)
 
 Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
 
 ...
 
 Detailed description / Documentation draft:
 
 ...
 
-By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
+> By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
 
-If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
+> If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
 
-Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/
+> Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/
8	.gitmodules (vendored)
@@ -193,7 +193,7 @@
 	url = https://github.com/danlark1/miniselect
 [submodule "contrib/rocksdb"]
 	path = contrib/rocksdb
 	url = https://github.com/ClickHouse-Extras/rocksdb.git
 [submodule "contrib/xz"]
 	path = contrib/xz
 	url = https://github.com/xz-mirror/xz
@@ -237,3 +237,9 @@
 [submodule "contrib/libpqxx"]
 	path = contrib/libpqxx
 	url = https://github.com/ClickHouse-Extras/libpqxx.git
+[submodule "contrib/sqlite-amalgamation"]
+	path = contrib/sqlite-amalgamation
+	url = https://github.com/azadkuh/sqlite-amalgamation
+[submodule "contrib/s2geometry"]
+	path = contrib/s2geometry
+	url = https://github.com/ClickHouse-Extras/s2geometry.git
156	CHANGELOG.md

@@ -1,3 +1,159 @@
### ClickHouse release v21.7, 2021-07-09

#### Backward Incompatible Change

* Improved performance of queries with explicitly defined large sets. Added compatibility setting `legacy_column_name_of_tuple_literal`. It makes sense to set it to `true` while doing a rolling update of a cluster from a version lower than 21.7 to any higher version; otherwise distributed queries with explicitly defined sets in the `IN` clause may fail during the update. [#25371](https://github.com/ClickHouse/ClickHouse/pull/25371) ([Anton Popov](https://github.com/CurtizJ)).
* Forward/backward incompatible change of the maximum buffer size in clickhouse-keeper (an experimental alternative to ZooKeeper). Better to do it now (before production) than later. [#25421](https://github.com/ClickHouse/ClickHouse/pull/25421) ([alesapin](https://github.com/alesapin)).
#### New Feature

* Support configuration in YAML format as an alternative to XML. This closes [#3607](https://github.com/ClickHouse/ClickHouse/issues/3607). [#21858](https://github.com/ClickHouse/ClickHouse/pull/21858) ([BoloniniD](https://github.com/BoloniniD)).
* Provides a way to restore a replicated table when the data is (possibly) present but the ZooKeeper metadata is lost. Resolves [#13458](https://github.com/ClickHouse/ClickHouse/issues/13458). [#13652](https://github.com/ClickHouse/ClickHouse/pull/13652) ([Mike Kot](https://github.com/myrrc)).
* Support structs and maps in Arrow/Parquet/ORC and dictionaries in Arrow input/output formats. Adds a new setting `output_format_arrow_low_cardinality_as_dictionary`. [#24341](https://github.com/ClickHouse/ClickHouse/pull/24341) ([Kruglov Pavel](https://github.com/Avogar)).
* Added support for `Array` type in dictionaries. [#25119](https://github.com/ClickHouse/ClickHouse/pull/25119) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `bitPositionsToArray`. Closes [#23792](https://github.com/ClickHouse/ClickHouse/issues/23792). Author Kevin Wan (@MaxWk). [#25394](https://github.com/ClickHouse/ClickHouse/pull/25394) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dateName` to return names like 'Friday' or 'April'. Author Daniil Kondratyev (@dankondr). [#25372](https://github.com/ClickHouse/ClickHouse/pull/25372) ([Maksim Kita](https://github.com/kitaisreal)).
* Add `toJSONString` function to serialize columns to their JSON representations. [#25164](https://github.com/ClickHouse/ClickHouse/pull/25164) ([Amos Bird](https://github.com/amosbird)).
* Now `query_log` has two new columns, `initial_query_start_time` and `initial_query_start_time_microsecond`, that record the starting time of a distributed query, if any. [#25022](https://github.com/ClickHouse/ClickHouse/pull/25022) ([Amos Bird](https://github.com/amosbird)).
* Add aggregate function `segmentLengthSum`. [#24250](https://github.com/ClickHouse/ClickHouse/pull/24250) ([flynn](https://github.com/ucasfl)).
* Add a new boolean setting `prefer_global_in_and_join` which defaults all IN/JOIN to GLOBAL IN/JOIN. [#23434](https://github.com/ClickHouse/ClickHouse/pull/23434) ([Amos Bird](https://github.com/amosbird)).
* Support `ALTER DELETE` queries for the `Join` table engine. [#23260](https://github.com/ClickHouse/ClickHouse/pull/23260) ([foolchi](https://github.com/foolchi)).
* Add `quantileBFloat16` aggregate function, as well as the corresponding `quantilesBFloat16` and `medianBFloat16`. It is a very simple and fast quantile estimator with a relative error of no more than 0.390625%. This closes [#16641](https://github.com/ClickHouse/ClickHouse/issues/16641). [#23204](https://github.com/ClickHouse/ClickHouse/pull/23204) ([Ivan Novitskiy](https://github.com/RedClusive)).
* Implement `sequenceNextNode()` function, useful for flow analysis. [#19766](https://github.com/ClickHouse/ClickHouse/pull/19766) ([achimbab](https://github.com/achimbab)).
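Several of the new functions above can be tried in a single query. A minimal sketch, assuming a ClickHouse server of version 21.7 or later (table and alias names are illustrative only):

```sql
-- Sketch only: requires a ClickHouse 21.7+ server; aliases are illustrative.
SELECT
    bitPositionsToArray(42) AS set_bit_positions,           -- positions of the set bits of 42
    dateName('weekday', toDate('2021-07-09')) AS day_name,  -- a name like 'Friday'
    toJSONString([1, 2, 3]) AS json_repr,                   -- JSON representation of the array
    quantileBFloat16(0.5)(number) AS approx_median          -- fast approximate median
FROM numbers(1000);
```

`quantileBFloat16` trades a small bounded relative error (the 0.390625% mentioned above corresponds to the bfloat16 mantissa width) for speed, which is the design choice behind the family of `quantilesBFloat16`/`medianBFloat16` aliases.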
#### Experimental Feature

* Add support for virtual filesystem over HDFS. [#11058](https://github.com/ClickHouse/ClickHouse/pull/11058) ([overshov](https://github.com/overshov)) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Now clickhouse-keeper (an experimental alternative to ZooKeeper) supports ZooKeeper-like `digest` ACLs. [#24448](https://github.com/ClickHouse/ClickHouse/pull/24448) ([alesapin](https://github.com/alesapin)).
#### Performance Improvement

* Added an optimization that transforms some functions into reading of subcolumns, reducing the amount of data read. E.g., the statement `col IS NULL` is transformed into reading of the subcolumn `col.null`. The optimization can be enabled with the setting `optimize_functions_to_subcolumns`, which is currently off by default. [#24406](https://github.com/ClickHouse/ClickHouse/pull/24406) ([Anton Popov](https://github.com/CurtizJ)).
* Rewrite more columns to possible alias expressions. This may enable better optimizations, such as projections. [#24405](https://github.com/ClickHouse/ClickHouse/pull/24405) ([Amos Bird](https://github.com/amosbird)).
* An index of type `bloom_filter` can be used for expressions with the `hasAny` function with constant arrays. This closes: [#24291](https://github.com/ClickHouse/ClickHouse/issues/24291). [#24900](https://github.com/ClickHouse/ClickHouse/pull/24900) ([Vasily Nemkov](https://github.com/Enmk)).
* Add exponential backoff to reschedule read attempts in case RabbitMQ queues are empty. (ClickHouse supports importing data from RabbitMQ.) Closes [#24340](https://github.com/ClickHouse/ClickHouse/issues/24340). [#24415](https://github.com/ClickHouse/ClickHouse/pull/24415) ([Kseniia Sumarokova](https://github.com/kssenii)).
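The subcolumn optimization above can be enabled per query. A hedged sketch, assuming a table `t` with a `Nullable` column `col` (both names are hypothetical; the setting name is as given in the entry):

```sql
-- Sketch: with the setting on, `col IS NULL` can be answered from the small
-- `col.null` null-mask subcolumn instead of reading the full column.
-- The setting is off by default as of 21.7.
SELECT count()
FROM t
WHERE col IS NULL
SETTINGS optimize_functions_to_subcolumns = 1;
```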
#### Improvement

* Allow limiting bandwidth for replication. Added two Replicated\*MergeTree settings, `max_replicated_fetches_network_bandwidth` and `max_replicated_sends_network_bandwidth`, which limit the maximum speed of replicated fetches/sends for a table, and two server-wide settings (in the `default` user profile), `max_replicated_fetches_network_bandwidth_for_server` and `max_replicated_sends_network_bandwidth_for_server`, which limit the maximum speed of replication for all tables. The settings are not followed perfectly accurately. Turned off by default. Fixes [#1821](https://github.com/ClickHouse/ClickHouse/issues/1821). [#24573](https://github.com/ClickHouse/ClickHouse/pull/24573) ([alesapin](https://github.com/alesapin)).
* Resource constraints and isolation for ODBC and Library bridges. Use a separate `clickhouse-bridge` group and user for bridge processes. Set `oom_score_adj` so the bridges will be the first subjects for the OOM killer. Set maximum RSS to 1 GiB. Closes [#23861](https://github.com/ClickHouse/ClickHouse/issues/23861). [#25280](https://github.com/ClickHouse/ClickHouse/pull/25280) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a standalone `clickhouse-keeper` symlink to the main `clickhouse` binary. Now it's possible to run coordination without the main ClickHouse server. [#24059](https://github.com/ClickHouse/ClickHouse/pull/24059) ([alesapin](https://github.com/alesapin)).
* Use global settings for queries to `VIEW`. Fixed the behavior when queries to `VIEW` used local settings, which led to errors if the settings on `CREATE VIEW` and `SELECT` were different. As of now, `VIEW` won't use these modified settings, but you can still pass additional settings in the `SETTINGS` section of the `CREATE VIEW` query. Closes [#20551](https://github.com/ClickHouse/ClickHouse/issues/20551). [#24095](https://github.com/ClickHouse/ClickHouse/pull/24095) ([Vladimir](https://github.com/vdimir)).
* On server start, parts with an incorrect partition ID are never removed, but always detached. [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25166](https://github.com/ClickHouse/ClickHouse/pull/25166) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase the size of the background schedule pool to 128 (`background_schedule_pool_size` setting). It allows avoiding replication queue hangs on a slow ZooKeeper connection. [#25072](https://github.com/ClickHouse/ClickHouse/pull/25072) ([alesapin](https://github.com/alesapin)).
* Add MergeTree setting `max_parts_to_merge_at_once` which limits the number of parts that can be merged in the background at once. Doesn't affect the `OPTIMIZE FINAL` query. Fixes [#1820](https://github.com/ClickHouse/ClickHouse/issues/1820). [#24496](https://github.com/ClickHouse/ClickHouse/pull/24496) ([alesapin](https://github.com/alesapin)).
* Allow the `NOT IN` operator to be used in partition pruning. [#24894](https://github.com/ClickHouse/ClickHouse/pull/24894) ([Amos Bird](https://github.com/amosbird)).
* Recognize IPv4 addresses like `127.0.1.1` as local. This is controversial and closes [#23504](https://github.com/ClickHouse/ClickHouse/issues/23504). Michael Filimonov will test this feature. [#24316](https://github.com/ClickHouse/ClickHouse/pull/24316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* A ClickHouse database created with MaterializeMySQL (an experimental feature) now contains all column comments from the MySQL database that was materialized. [#25199](https://github.com/ClickHouse/ClickHouse/pull/25199) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Add settings (`connection_auto_close`/`connection_max_tries`/`connection_pool_size`) for the MySQL storage engine. [#24146](https://github.com/ClickHouse/ClickHouse/pull/24146) ([Azat Khuzhin](https://github.com/azat)).
* Improve startup time of the Distributed engine. [#25663](https://github.com/ClickHouse/ClickHouse/pull/25663) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Drop replicas from the dirname for `internal_replication=true` (allows INSERT into Distributed with a cluster of any number of replicas; previously only 15 replicas were supported, and more would fail with ENAMETOOLONG while creating the directory for async blocks). [#25513](https://github.com/ClickHouse/ClickHouse/pull/25513) ([Azat Khuzhin](https://github.com/azat)).
* Added support of the `Interval` type for `LowCardinality`. It is needed for intermediate values of some expressions. Closes [#21730](https://github.com/ClickHouse/ClickHouse/issues/21730). [#25410](https://github.com/ClickHouse/ClickHouse/pull/25410) ([Vladimir](https://github.com/vdimir)).
* Add `==` operator on time conditions for the `sequenceMatch` and `sequenceCount` functions, e.g. `sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)`. [#25299](https://github.com/ClickHouse/ClickHouse/pull/25299) ([Christophe Kalenzaga](https://github.com/mga-chka)).
* Add settings `http_max_fields`, `http_max_field_name_size`, `http_max_field_value_size`. [#25296](https://github.com/ClickHouse/ClickHouse/pull/25296) ([Ivan](https://github.com/abyss7)).
* Add support for function `if` with `Decimal` and `Int` types on its branches. This closes [#20549](https://github.com/ClickHouse/ClickHouse/issues/20549). This closes [#10142](https://github.com/ClickHouse/ClickHouse/issues/10142). [#25283](https://github.com/ClickHouse/ClickHouse/pull/25283) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update prompt in `clickhouse-client` and display a message when reconnecting. This closes [#10577](https://github.com/ClickHouse/ClickHouse/issues/10577). [#25281](https://github.com/ClickHouse/ClickHouse/pull/25281) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correct memory tracking in aggregate function `topK`. This closes [#25259](https://github.com/ClickHouse/ClickHouse/issues/25259). [#25260](https://github.com/ClickHouse/ClickHouse/pull/25260) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `topLevelDomain` for IDN hosts (i.e. `example.рф`); previously it returned an empty string for such hosts. [#25103](https://github.com/ClickHouse/ClickHouse/pull/25103) ([Azat Khuzhin](https://github.com/azat)).
* Detect the Linux kernel version at runtime (to make nested epoll work, which is required for `async_socket_for_remote`/`use_hedged_requests`; otherwise remote queries may get stuck). [#25067](https://github.com/ClickHouse/ClickHouse/pull/25067) ([Azat Khuzhin](https://github.com/azat)).
* For distributed queries, when `optimize_skip_unused_shards=1`, allow skipping a shard with a condition like `(sharding key) IN (one-element-tuple)`. (Tuples with many elements were supported; a tuple with a single element did not work because it is parsed as a literal.) [#24930](https://github.com/ClickHouse/ClickHouse/pull/24930) ([Amos Bird](https://github.com/amosbird)).
* Improved log messages for S3 errors; no more double whitespace in case of empty keys and buckets. [#24897](https://github.com/ClickHouse/ClickHouse/pull/24897) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Some queries require multi-pass semantic analysis. Try reusing built sets for `IN` in this case. [#24874](https://github.com/ClickHouse/ClickHouse/pull/24874) ([Amos Bird](https://github.com/amosbird)).
* Respect `max_distributed_connections` for `insert_distributed_sync` (otherwise, for huge clusters and sync inserts, it may run out of `max_thread_pool_size`). [#24754](https://github.com/ClickHouse/ClickHouse/pull/24754) ([Azat Khuzhin](https://github.com/azat)).
* Avoid hiding errors like `Limit for rows or bytes to read exceeded` for scalar subqueries. [#24545](https://github.com/ClickHouse/ClickHouse/pull/24545) ([nvartolomei](https://github.com/nvartolomei)).
* Make the String-to-Int parser stricter so that `toInt64('+')` will throw. [#24475](https://github.com/ClickHouse/ClickHouse/pull/24475) ([Amos Bird](https://github.com/amosbird)).
* If `SSD_CACHE` is created with a DDL query, it can be created only inside the `user_files` directory. [#24466](https://github.com/ClickHouse/ClickHouse/pull/24466) ([Maksim Kita](https://github.com/kitaisreal)).
* PostgreSQL support for specifying a non-default schema for insert queries. Closes [#24149](https://github.com/ClickHouse/ClickHouse/issues/24149). [#24413](https://github.com/ClickHouse/ClickHouse/pull/24413) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix IPv6 address resolving (i.e. fixes `select * from remote('[::1]', system.one)`). [#24319](https://github.com/ClickHouse/ClickHouse/pull/24319) ([Azat Khuzhin](https://github.com/azat)).
* Fix trailing whitespace in the FROM clause with subqueries in multiline mode, and also change the output of queries slightly in a more human-friendly way. [#24151](https://github.com/ClickHouse/ClickHouse/pull/24151) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Add the ability to split a distributed batch on failures (i.e. due to memory limits or corruption), under `distributed_directory_monitor_split_batch_on_failure` (OFF by default). [#23864](https://github.com/ClickHouse/ClickHouse/pull/23864) ([Azat Khuzhin](https://github.com/azat)).
* Handle column name clashes for the `Join` table engine. Closes [#20309](https://github.com/ClickHouse/ClickHouse/issues/20309). [#23769](https://github.com/ClickHouse/ClickHouse/pull/23769) ([Vladimir](https://github.com/vdimir)).
* Display progress for the `File` table engine in `clickhouse-local` and on INSERT queries in `clickhouse-client` when data is passed to stdin. Closes [#18209](https://github.com/ClickHouse/ClickHouse/issues/18209). [#23656](https://github.com/ClickHouse/ClickHouse/pull/23656) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Bugfixes and improvements of `clickhouse-copier`. Allow copying tables with different (but compatible) schemas. Closes [#9159](https://github.com/ClickHouse/ClickHouse/issues/9159). Added a test to copy ReplacingMergeTree. Closes [#22711](https://github.com/ClickHouse/ClickHouse/issues/22711). Support TTL on columns and Data Skipping Indices: they are simply removed when creating the internal Distributed table (the underlying table will have TTL and skipping indices). Closes [#19384](https://github.com/ClickHouse/ClickHouse/issues/19384). Allow copying MATERIALIZED and ALIAS columns. There are some cases in which it could be helpful (e.g. if this column is in the PRIMARY KEY). Now it can be allowed by setting the `allow_to_copy_alias_and_materialized_columns` property to true in the task configuration. Closes [#9177](https://github.com/ClickHouse/ClickHouse/issues/9177). Closes [#11007](https://github.com/ClickHouse/ClickHouse/issues/11007). Closes [#9514](https://github.com/ClickHouse/ClickHouse/issues/9514). Added a property `allow_to_drop_target_partitions` in the task configuration to drop a partition in the original table before moving helping tables. Closes [#20957](https://github.com/ClickHouse/ClickHouse/issues/20957). Get rid of the `OPTIMIZE DEDUPLICATE` query. This hack was needed because `ALTER TABLE MOVE PARTITION` was retried many times and plain MergeTree tables don't have deduplication. Closes [#17966](https://github.com/ClickHouse/ClickHouse/issues/17966). Write progress to a ZooKeeper node on path `task_path + /status` in JSON format. Closes [#20955](https://github.com/ClickHouse/ClickHouse/issues/20955). Support for ReplicatedTables without arguments. Closes [#24834](https://github.com/ClickHouse/ClickHouse/issues/24834). [#23518](https://github.com/ClickHouse/ClickHouse/pull/23518) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added sleep with backoff between read retries from S3. [#23461](https://github.com/ClickHouse/ClickHouse/pull/23461) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Respect `insert_allow_materialized_columns` (allows materialized columns) for INSERT into a `Distributed` table. [#23349](https://github.com/ClickHouse/ClickHouse/pull/23349) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to push down LIMIT for distributed queries. [#23027](https://github.com/ClickHouse/ClickHouse/pull/23027) ([Azat Khuzhin](https://github.com/azat)).
* Fix zero-copy replication with several S3 volumes (fixes [#22679](https://github.com/ClickHouse/ClickHouse/issues/22679)). [#22864](https://github.com/ClickHouse/ClickHouse/pull/22864) ([ianton-ru](https://github.com/ianton-ru)).
* Resolve the actual port number bound when a user requests any available port from the operating system, to show it in the log message. [#25569](https://github.com/ClickHouse/ClickHouse/pull/25569) ([bnaecker](https://github.com/bnaecker)).
* Fixed a case when conversion of PostgreSQL arrays sometimes resulted in a String data type instead of an n-dimensional array, because `attndims` works incorrectly in some cases. Closes [#24804](https://github.com/ClickHouse/ClickHouse/issues/24804). [#25538](https://github.com/ClickHouse/ClickHouse/pull/25538) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix conversion of DateTime with timezone for MySQL, PostgreSQL, ODBC. Closes [#5057](https://github.com/ClickHouse/ClickHouse/issues/5057). [#25528](https://github.com/ClickHouse/ClickHouse/pull/25528) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Distinguish KILL MUTATION for different tables (fixes unexpected `Cancelled mutating parts` error). [#25025](https://github.com/ClickHouse/ClickHouse/pull/25025) ([Azat Khuzhin](https://github.com/azat)).
* Allow declaring an S3 disk at the root of a bucket (the S3 virtual filesystem is an experimental feature under development). [#24898](https://github.com/ClickHouse/ClickHouse/pull/24898) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Enable reading of subcolumns (e.g. components of Tuples) for distributed tables. [#24472](https://github.com/ClickHouse/ClickHouse/pull/24472) ([Anton Popov](https://github.com/CurtizJ)).
* A feature for the MySQL compatibility protocol: make the `user` function return correct output. Closes [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697). [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697) ([sundyli](https://github.com/sundy-li)).
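The per-table replication bandwidth settings described above could be applied roughly like this (a sketch only; `replicated_tbl` is a hypothetical ReplicatedMergeTree table and the byte-per-second values are arbitrary):

```sql
-- Sketch: cap replicated fetches/sends for one table at ~100 MiB/s.
-- Server-wide counterparts (*_for_server) go in the `default` user profile instead.
ALTER TABLE replicated_tbl
    MODIFY SETTING
        max_replicated_fetches_network_bandwidth = 104857600,
        max_replicated_sends_network_bandwidth = 104857600;
```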
#### Bug Fix

* Improvement for backward compatibility. Use the old modulo function version when used in a partition key. Closes [#23508](https://github.com/ClickHouse/ClickHouse/issues/23508). [#24157](https://github.com/ClickHouse/ClickHouse/pull/24157) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix an extremely rare bug on low-memory servers which can lead to the inability to perform merges without a restart. Possibly fixes [#24603](https://github.com/ClickHouse/ClickHouse/issues/24603). [#24872](https://github.com/ClickHouse/ClickHouse/pull/24872) ([alesapin](https://github.com/alesapin)).
* Fix an extremely rare error `Tagging already tagged part` in the replication queue during concurrent `alter move/replace partition`. Possibly fixes [#22142](https://github.com/ClickHouse/ClickHouse/issues/22142). [#24961](https://github.com/ClickHouse/ClickHouse/pull/24961) ([alesapin](https://github.com/alesapin)).
* Fix a potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See [#24523](https://github.com/ClickHouse/ClickHouse/issues/24523). [#25015](https://github.com/ClickHouse/ClickHouse/pull/25015) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the behavior when the query `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` does not finish. This was detected on a server with an extremely low amount of RAM. [#24457](https://github.com/ClickHouse/ClickHouse/pull/24457) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug which can lead to the ZooKeeper client hanging inside clickhouse-server. [#24721](https://github.com/ClickHouse/ClickHouse/pull/24721) ([alesapin](https://github.com/alesapin)).
* If the ZooKeeper connection was lost and a replica was cloned after restoring the connection, its replication queue might contain outdated entries. Fixed a failed assertion when the replication queue contains intersecting virtual parts. It may rarely happen if some data part was lost. Print an error in the log instead of terminating. [#24777](https://github.com/ClickHouse/ClickHouse/pull/24777) ([tavplubix](https://github.com/tavplubix)).
* Fix a lost `WHERE` condition in the expression-push-down optimization of the query plan (setting `query_plan_filter_push_down = 1` by default). Fixes [#25368](https://github.com/ClickHouse/ClickHouse/issues/25368). [#25370](https://github.com/ClickHouse/ClickHouse/pull/25370) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug which can lead to intersecting parts after merges with TTL: `Part all_40_40_0 is covered by all_40_40_1 but should be merged into all_40_41_1. This shouldn't happen often.`. [#25549](https://github.com/ClickHouse/ClickHouse/pull/25549) ([alesapin](https://github.com/alesapin)).
* On ZooKeeper connection loss, a `ReplicatedMergeTree` table might wait for background operations to complete before trying to reconnect. It's fixed; now background operations are stopped forcefully. [#25306](https://github.com/ClickHouse/ClickHouse/pull/25306) ([tavplubix](https://github.com/tavplubix)).
* Fix the error `Key expression contains comparison between inconvertible types` for queries with `ARRAY JOIN` in case an array is used in the primary key. Fixes [#8247](https://github.com/ClickHouse/ClickHouse/issues/8247). [#25546](https://github.com/ClickHouse/ClickHouse/pull/25546) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong totals for queries with `WITH TOTALS` and `WITH FILL`. Fixes [#20872](https://github.com/ClickHouse/ClickHouse/issues/20872). [#25539](https://github.com/ClickHouse/ClickHouse/pull/25539) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a data race when querying `system.clusters` while reloading the cluster configuration at the same time. [#25737](https://github.com/ClickHouse/ClickHouse/pull/25737) ([Amos Bird](https://github.com/amosbird)).
* Fixed a `No such file or directory` error on moving a `Distributed` table between databases. Fixes [#24971](https://github.com/ClickHouse/ClickHouse/issues/24971). [#25667](https://github.com/ClickHouse/ClickHouse/pull/25667) ([tavplubix](https://github.com/tavplubix)).
* `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes [#24869](https://github.com/ClickHouse/ClickHouse/issues/24869). [#25665](https://github.com/ClickHouse/ClickHouse/pull/25665) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in the `Replicated` database engine that might rarely cause some replica to skip an enqueued DDL query. [#24805](https://github.com/ClickHouse/ClickHouse/pull/24805) ([tavplubix](https://github.com/tavplubix)).
* Fix a null pointer dereference in `EXPLAIN AST` without a query. [#25631](https://github.com/ClickHouse/ClickHouse/pull/25631) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix waiting for automatic dropping of empty parts. It could lead to the background pool filling up and replication getting stuck. [#23315](https://github.com/ClickHouse/ClickHouse/pull/23315) ([Anton Popov](https://github.com/CurtizJ)).
* Fix restore of a table stored in the S3 virtual filesystem (an experimental feature not ready for production). [#25601](https://github.com/ClickHouse/ClickHouse/pull/25601) ([ianton-ru](https://github.com/ianton-ru)).
* Fix a nullptr dereference in the `Arrow` format when using `Decimal256`. Add `Decimal256` support for the `Arrow` format. [#25531](https://github.com/ClickHouse/ClickHouse/pull/25531) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix an excessive underscore before the names of the preprocessed configuration files. [#25431](https://github.com/ClickHouse/ClickHouse/pull/25431) ([Vitaly Baranov](https://github.com/vitlibar)).
* A fix for the `clickhouse-copier` tool: fix a segfault when `sharding_key` is absent in the task config. [#25419](https://github.com/ClickHouse/ClickHouse/pull/25419) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the `REPLACE` column transformer when used in DDL by correctly quoting the formatted query. This fixes [#23925](https://github.com/ClickHouse/ClickHouse/issues/23925). [#25391](https://github.com/ClickHouse/ClickHouse/pull/25391) ([Amos Bird](https://github.com/amosbird)).
* Fix the possibility of non-deterministic behaviour of the `quantileDeterministic` function and similar. This closes [#20480](https://github.com/ClickHouse/ClickHouse/issues/20480). [#25313](https://github.com/ClickHouse/ClickHouse/pull/25313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `SimpleAggregateFunction(LowCardinality)` for `SummingMergeTree`. Fixes [#25134](https://github.com/ClickHouse/ClickHouse/issues/25134). [#25300](https://github.com/ClickHouse/ClickHouse/pull/25300) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a logical error with the exception message "Cannot sum Array/Tuple in min/maxMap". [#25298](https://github.com/ClickHouse/ClickHouse/pull/25298) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the error `Bad cast from type DB::ColumnLowCardinality to DB::ColumnVector<char8_t>` for queries where a `LowCardinality` argument was used for IN (this bug appeared in 21.6). Fixes [#25187](https://github.com/ClickHouse/ClickHouse/issues/25187). [#25290](https://github.com/ClickHouse/ClickHouse/pull/25290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect behaviour of `joinGetOrNull` with non-nullable columns. This fixes [#24261](https://github.com/ClickHouse/ClickHouse/issues/24261). [#25288](https://github.com/ClickHouse/ClickHouse/pull/25288) ([Amos Bird](https://github.com/amosbird)).
* Fix incorrect behaviour and a UBSan report in big integers. In previous versions `CAST(1e19 AS UInt128)` returned zero. [#25279](https://github.com/ClickHouse/ClickHouse/pull/25279) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed an error which occurred while inserting a subset of columns using the CSVWithNames format. Fixes [#25129](https://github.com/ClickHouse/ClickHouse/issues/25129). [#25169](https://github.com/ClickHouse/ClickHouse/pull/25169) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not use a table's projection for `SELECT` with `FINAL`. It is not supported yet. [#25163](https://github.com/ClickHouse/ClickHouse/pull/25163) ([Amos Bird](https://github.com/amosbird)).
* Fix possible parts loss after updating up to 21.5 in case the table used `UUID` in the partition key. (It is not recommended to use `UUID` in the partition key.) Fixes [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25127](https://github.com/ClickHouse/ClickHouse/pull/25127) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash in queries with a cross join and `joined_subquery_requires_alias = 0`. Fixes [#24011](https://github.com/ClickHouse/ClickHouse/issues/24011). [#25082](https://github.com/ClickHouse/ClickHouse/pull/25082) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||||
|
* Fix bug with constant maps in mapContains function that lead to error `empty column was returned by function mapContains`. Closes [#25077](https://github.com/ClickHouse/ClickHouse/issues/25077). [#25080](https://github.com/ClickHouse/ClickHouse/pull/25080) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* Remove possibility to create tables with columns referencing themselves like `a UInt32 ALIAS a + 1` or `b UInt32 MATERIALIZED b`. Fixes [#24910](https://github.com/ClickHouse/ClickHouse/issues/24910), [#24292](https://github.com/ClickHouse/ClickHouse/issues/24292). [#25059](https://github.com/ClickHouse/ClickHouse/pull/25059) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Fix wrong result when using aggregate projection with *not empty* `GROUP BY` key to execute query with `GROUP BY` by *empty* key. [#25055](https://github.com/ClickHouse/ClickHouse/pull/25055) ([Amos Bird](https://github.com/amosbird)).
|
||||||
|
* Fix serialization of splitted nested messages in Protobuf format. This PR fixes [#24647](https://github.com/ClickHouse/ClickHouse/issues/24647). [#25000](https://github.com/ClickHouse/ClickHouse/pull/25000) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||||
|
* Fix limit/offset settings for distributed queries (ignore on the remote nodes). [#24940](https://github.com/ClickHouse/ClickHouse/pull/24940) ([Azat Khuzhin](https://github.com/azat)).
|
||||||
|
* Fix possible heap-buffer-overflow in `Arrow` format. [#24922](https://github.com/ClickHouse/ClickHouse/pull/24922) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* Fixed possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3 (S3 virtual filesystem is an experimental feature under development that should not be used in production). [#24885](https://github.com/ClickHouse/ClickHouse/pull/24885) ([Pavel Kovalenko](https://github.com/Jokser)).
|
||||||
|
* Fix "Missing columns" exception when joining Distributed Materialized View. [#24870](https://github.com/ClickHouse/ClickHouse/pull/24870) ([Azat Khuzhin](https://github.com/azat)).
|
||||||
|
* Allow `NULL` values in postgresql compatibility protocol. Closes [#22622](https://github.com/ClickHouse/ClickHouse/issues/22622). [#24857](https://github.com/ClickHouse/ClickHouse/pull/24857) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||||
|
* Fix bug when exception `Mutation was killed` can be thrown to the client on mutation wait when mutation not loaded into memory yet. [#24809](https://github.com/ClickHouse/ClickHouse/pull/24809) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Fixed bug in deserialization of random generator state with might cause some data types such as `AggregateFunction(groupArraySample(N), T))` to behave in a non-deterministic way. [#24538](https://github.com/ClickHouse/ClickHouse/pull/24538) ([tavplubix](https://github.com/tavplubix)).
|
||||||
|
* Disallow building uniqXXXXStates of other aggregation states. [#24523](https://github.com/ClickHouse/ClickHouse/pull/24523) ([Raúl Marín](https://github.com/Algunenano)). Then allow it back by actually eliminating the root cause of the related issue. ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||||
|
* Fix usage of tuples in `CREATE .. AS SELECT` queries. [#24464](https://github.com/ClickHouse/ClickHouse/pull/24464) ([Anton Popov](https://github.com/CurtizJ)).
|
||||||
|
* Fix computation of total bytes in `Buffer` table. In current ClickHouse version total_writes.bytes counter decreases too much during the buffer flush. It leads to counter overflow and totalBytes return something around 17.44 EB some time after the flush. [#24450](https://github.com/ClickHouse/ClickHouse/pull/24450) ([DimasKovas](https://github.com/DimasKovas)).
|
||||||
|
* Fix incorrect information about the monotonicity of toWeek function. This fixes [#24422](https://github.com/ClickHouse/ClickHouse/issues/24422) . This bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/5212 , and was exposed later by smarter partition pruner. [#24446](https://github.com/ClickHouse/ClickHouse/pull/24446) ([Amos Bird](https://github.com/amosbird)).
|
||||||
|
* When user authentication is managed by LDAP. Fixed potential deadlock that can happen during LDAP role (re)mapping, when LDAP group is mapped to a nonexistent local role. [#24431](https://github.com/ClickHouse/ClickHouse/pull/24431) ([Denis Glazachev](https://github.com/traceon)).
|
||||||
|
* In "multipart/form-data" message consider the CRLF preceding a boundary as part of it. Fixes [#23905](https://github.com/ClickHouse/ClickHouse/issues/23905). [#24399](https://github.com/ClickHouse/ClickHouse/pull/24399) ([Ivan](https://github.com/abyss7)).
|
||||||
|
* Fix drop partition with intersect fake parts. In rare cases there might be parts with mutation version greater than current block number. [#24321](https://github.com/ClickHouse/ClickHouse/pull/24321) ([Amos Bird](https://github.com/amosbird)).
|
||||||
|
* Fixed a bug in moving Materialized View from Ordinary to Atomic database (`RENAME TABLE` query). Now inner table is moved to new database together with Materialized View. Fixes [#23926](https://github.com/ClickHouse/ClickHouse/issues/23926). [#24309](https://github.com/ClickHouse/ClickHouse/pull/24309) ([tavplubix](https://github.com/tavplubix)).
|
||||||
|
* Allow empty HTTP headers. Fixes [#23901](https://github.com/ClickHouse/ClickHouse/issues/23901). [#24285](https://github.com/ClickHouse/ClickHouse/pull/24285) ([Ivan](https://github.com/abyss7)).
|
||||||
|
* Correct processing of mutations (ALTER UPDATE/DELETE) in Memory tables. Closes [#24274](https://github.com/ClickHouse/ClickHouse/issues/24274). [#24275](https://github.com/ClickHouse/ClickHouse/pull/24275) ([flynn](https://github.com/ucasfl)).
|
||||||
|
* Make column LowCardinality property in JOIN output the same as in the input, close [#23351](https://github.com/ClickHouse/ClickHouse/issues/23351), close [#20315](https://github.com/ClickHouse/ClickHouse/issues/20315). [#24061](https://github.com/ClickHouse/ClickHouse/pull/24061) ([Vladimir](https://github.com/vdimir)).
|
||||||
|
* A fix for Kafka tables. Fix the bug in failover behavior when Engine = Kafka was not able to start consumption if the same consumer had an empty assignment previously. Closes [#21118](https://github.com/ClickHouse/ClickHouse/issues/21118). [#21267](https://github.com/ClickHouse/ClickHouse/pull/21267) ([filimonov](https://github.com/filimonov)).
|
||||||
|
|
||||||
|
#### Build/Testing/Packaging Improvement

* Add `darwin-aarch64` (Mac M1 / Apple Silicon) builds in CI [#25560](https://github.com/ClickHouse/ClickHouse/pull/25560) ([Ivan](https://github.com/abyss7)) and put the links to the docs and website ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add cross-platform embedding of binary resources into executables. It works on Illumos. [#25146](https://github.com/ClickHouse/ClickHouse/pull/25146) ([bnaecker](https://github.com/bnaecker)).
* Add join-related options to stress tests to improve fuzzing. [#25200](https://github.com/ClickHouse/ClickHouse/pull/25200) ([Vladimir](https://github.com/vdimir)).
* Enable build with the S3 module on macOS [#25217](https://github.com/ClickHouse/ClickHouse/issues/25217). [#25218](https://github.com/ClickHouse/ClickHouse/pull/25218) ([kevin wan](https://github.com/MaxWk)).
* Add integration test cases to cover the JDBC bridge. [#25047](https://github.com/ClickHouse/ClickHouse/pull/25047) ([Zhichun Wu](https://github.com/zhicwu)).
* Integration test configuration now has special treatment for dictionaries; removed the remaining manual dictionary setup. [#24728](https://github.com/ClickHouse/ClickHouse/pull/24728) ([Ilya Yatsishin](https://github.com/qoega)).
* Add libfuzzer tests for the YAMLParser class. [#24480](https://github.com/ClickHouse/ClickHouse/pull/24480) ([BoloniniD](https://github.com/BoloniniD)).
* Ubuntu 20.04 is now used to run integration tests, and the docker-compose version used to run them is updated to 1.28.2. Environment variables now take effect on docker-compose. Reworked test_dictionaries_all_layouts_separate_sources to allow parallel runs. [#20393](https://github.com/ClickHouse/ClickHouse/pull/20393) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix a TOCTOU error in the installation script. [#25277](https://github.com/ClickHouse/ClickHouse/pull/25277) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release 21.6, 2021-06-05

#### Upgrade Notes

@@ -536,10 +536,12 @@ include (cmake/find/rapidjson.cmake)
 include (cmake/find/fastops.cmake)
 include (cmake/find/odbc.cmake)
 include (cmake/find/nanodbc.cmake)
+include (cmake/find/sqlite.cmake)
 include (cmake/find/rocksdb.cmake)
 include (cmake/find/libpqxx.cmake)
 include (cmake/find/nuraft.cmake)
 include (cmake/find/yaml-cpp.cmake)
+include (cmake/find/s2geometry.cmake)
 
 if(NOT USE_INTERNAL_PARQUET_LIBRARY)
     set (ENABLE_ORC OFF CACHE INTERNAL "")
@@ -18,6 +18,8 @@
 
 #define DATE_LUT_MAX (0xFFFFFFFFU - 86400)
 #define DATE_LUT_MAX_DAY_NUM 0xFFFF
+/// Max int value of Date32, DATE LUT cache size minus daynum_offset_epoch
+#define DATE_LUT_MAX_EXTEND_DAY_NUM (DATE_LUT_SIZE - 16436)
 
 /// A constant to add to time_t so every supported time point becomes non-negative and still has the same remainder of division by 3600.
 /// If we treat "remainder of division" operation in the sense of modular arithmetic (not like in C++).
@@ -270,6 +272,8 @@ public:
     auto getOffsetAtStartOfEpoch() const { return offset_at_start_of_epoch; }
     auto getTimeOffsetAtStartOfLUT() const { return offset_at_start_of_lut; }
 
+    auto getDayNumOffsetEpoch() const { return daynum_offset_epoch; }
+
     /// All functions below are thread-safe; arguments are not checked.
 
     inline ExtendedDayNum toDayNum(ExtendedDayNum d) const
@@ -926,15 +930,17 @@ public:
     {
         if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
             return LUTIndex(0);
 
-        return LUTIndex{years_months_lut[(year - DATE_LUT_MIN_YEAR) * 12 + month - 1] + day_of_month - 1};
+        auto year_lut_index = (year - DATE_LUT_MIN_YEAR) * 12 + month - 1;
+        UInt32 index = years_months_lut[year_lut_index].toUnderType() + day_of_month - 1;
+        /// When date is out of range, default value is DATE_LUT_SIZE - 1 (2283-11-11)
+        return LUTIndex{std::min(index, static_cast<UInt32>(DATE_LUT_SIZE - 1))};
     }
 
     /// Create DayNum from year, month, day of month.
-    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month) const
+    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month, Int32 default_error_day_num = 0) const
     {
         if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
-            return ExtendedDayNum(0);
+            return ExtendedDayNum(default_error_day_num);
 
         return toDayNum(makeLUTIndex(year, month, day_of_month));
     }
@@ -1091,9 +1097,9 @@ public:
         return lut[new_index].date + time;
     }
 
-    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int64 delta) const
+    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int32 delta) const
     {
-        return addDays(t, delta * 7);
+        return addDays(t, static_cast<Int64>(delta) * 7);
     }
 
     inline UInt8 saturateDayOfMonth(Int16 year, UInt8 month, UInt8 day_of_month) const
@@ -1158,14 +1164,14 @@ public:
         return toDayNum(addMonthsIndex(d, delta));
     }
 
-    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int64 delta) const
+    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int32 delta) const
     {
-        return addMonths(t, delta * 3);
+        return addMonths(t, static_cast<Int64>(delta) * 3);
     }
 
-    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int64 delta) const
+    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int32 delta) const
     {
-        return addMonths(d, delta * 3);
+        return addMonths(d, static_cast<Int64>(delta) * 3);
     }
 
     template <typename DateOrTime>
@@ -70,6 +70,14 @@ public:
         m_day = values.day_of_month;
     }
 
+    explicit LocalDate(ExtendedDayNum day_num)
+    {
+        const auto & values = DateLUT::instance().getValues(day_num);
+        m_year = values.year;
+        m_month = values.month;
+        m_day = values.day_of_month;
+    }
+
     LocalDate(unsigned short year_, unsigned char month_, unsigned char day_)
         : m_year(year_), m_month(month_), m_day(day_)
     {
@@ -98,6 +106,12 @@ public:
         return DayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
     }
 
+    ExtendedDayNum getExtenedDayNum() const
+    {
+        const auto & lut = DateLUT::instance();
+        return ExtendedDayNum (lut.makeDayNum(m_year, m_month, m_day).toUnderType());
+    }
+
     operator DayNum() const
     {
         return getDayNum();
@@ -69,7 +69,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
     }
 
     std::string line;
-    if (!getline(in, line).good())
+    if (getline(in, line).bad())
     {
         rx.print("Cannot read from %s (for conversion): %s\n",
             path.c_str(), errnoToString(errno).c_str());
@@ -78,7 +78,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
 
     /// This is the marker of the date, no need to convert.
     static char const REPLXX_TIMESTAMP_PATTERN[] = "### dddd-dd-dd dd:dd:dd.ddd";
-    if (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN))
+    if (line.empty() || (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN)))
    {
        return;
    }
24 base/common/removeDuplicates.h Normal file
@@ -0,0 +1,24 @@
+#pragma once
+
+#include <vector>
+
+/// Removes duplicates from a container without changing the order of its elements.
+/// Keeps the last occurrence of each element.
+/// Should NOT be used for containers with a lot of elements because it has O(N^2) complexity.
+template <typename T>
+void removeDuplicatesKeepLast(std::vector<T> & vec)
+{
+    auto begin = vec.begin();
+    auto end = vec.end();
+    auto new_begin = end;
+    for (auto current = end; current != begin;)
+    {
+        --current;
+        if (std::find(new_begin, end, *current) == end)
+        {
+            --new_begin;
+            if (new_begin != current)
+                *new_begin = *current;
+        }
+    }
+    vec.erase(begin, new_begin);
+}
@@ -259,10 +259,25 @@ private:
     Poco::Logger * log;
     BaseDaemon & daemon;
 
-    void onTerminate(const std::string & message, UInt32 thread_num) const
+    void onTerminate(std::string_view message, UInt32 thread_num) const
     {
+        size_t pos = message.find('\n');
+
         LOG_FATAL(log, "(version {}{}, {}) (from thread {}) {}",
-            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message);
+            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message.substr(0, pos));
+
+        /// Print trace from std::terminate exception line-by-line to make it easy for grep.
+        while (pos != std::string_view::npos)
+        {
+            ++pos;
+            size_t next_pos = message.find('\n', pos);
+            size_t size = next_pos;
+            if (next_pos != std::string_view::npos)
+                size = next_pos - pos;
+
+            LOG_FATAL(log, "{}", message.substr(pos, size));
+            pos = next_pos;
+        }
    }
 
    void onFault(
@@ -4,13 +4,24 @@ QUERIES_FILE="queries.sql"
 TABLE=$1
 TRIES=3
 
+if [ -x ./clickhouse ]
+then
+    CLICKHOUSE_CLIENT="./clickhouse client"
+elif command -v clickhouse-client >/dev/null 2>&1
+then
+    CLICKHOUSE_CLIENT="clickhouse-client"
+else
+    echo "clickhouse-client is not found"
+    exit 1
+fi
+
 cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
     sync
     echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
 
     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(clickhouse-client --time --format=Null --query="$query" 2>&1)
+        RES=$(${CLICKHOUSE_CLIENT} --time --format=Null --max_memory_usage=100G --query="$query" 2>&1)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
@@ -11,8 +11,8 @@ DATASET="${TABLE}_v1.tar.xz"
 QUERIES_FILE="queries.sql"
 TRIES=3
 
-AMD64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
-AARCH64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_special_build_check/clang-10-aarch64_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
+AMD64_BIN_URL="https://builds.clickhouse.tech/master/amd64/clickhouse"
+AARCH64_BIN_URL="https://builds.clickhouse.tech/master/aarch64/clickhouse"
 
 # Note: on older Ubuntu versions, 'axel' does not support IPv6. If you are using IPv6-only servers on very old Ubuntu, just don't install 'axel'.
 
@@ -89,7 +89,7 @@ cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
 
     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(./clickhouse client --max_memory_usage 100000000000 --time --format=Null --query="$query" 2>&1 ||:)
+        RES=$(./clickhouse client --max_memory_usage 100G --time --format=Null --query="$query" 2>&1 ||:)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
@@ -2,11 +2,11 @@
 
 # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
 # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
-SET(VERSION_REVISION 54453)
+SET(VERSION_REVISION 54454)
 SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 8)
+SET(VERSION_MINOR 9)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH fb895056568e26200629c7d19626e92d2dedc70d)
-SET(VERSION_DESCRIBE v21.8.1.1-prestable)
-SET(VERSION_STRING 21.8.1.1)
+SET(VERSION_GITHASH f48c5af90c2ad51955d1ee3b6b05d006b03e4238)
+SET(VERSION_DESCRIBE v21.9.1.1-prestable)
+SET(VERSION_STRING 21.9.1.1)
 # end of autochange
@@ -53,5 +53,6 @@ macro(clickhouse_embed_binaries)
         set_property(SOURCE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}" APPEND PROPERTY INCLUDE_DIRECTORIES "${EMBED_RESOURCE_DIR}")
 
         target_sources("${EMBED_TARGET}" PRIVATE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}")
+        set_target_properties("${EMBED_TARGET}" PROPERTIES OBJECT_DEPENDS "${RESOURCE_FILE}")
     endforeach()
 endmacro()
24 cmake/find/s2geometry.cmake Normal file
@@ -0,0 +1,24 @@
+option(ENABLE_S2_GEOMETRY "Enable S2 geometry library" ${ENABLE_LIBRARIES})
+
+if (ENABLE_S2_GEOMETRY)
+    if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/s2geometry")
+        message (WARNING "submodule contrib/s2geometry is missing. to fix try run: \n git submodule update --init --recursive")
+        set (ENABLE_S2_GEOMETRY 0)
+        set (USE_S2_GEOMETRY 0)
+    else()
+        if (OPENSSL_FOUND)
+            set (S2_GEOMETRY_LIBRARY s2)
+            set (S2_GEOMETRY_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/s2geometry/src/s2)
+            set (USE_S2_GEOMETRY 1)
+        else()
+            message (WARNING "S2 uses OpenSSL, but the latter is absent.")
+        endif()
+    endif()
+
+    if (NOT USE_S2_GEOMETRY)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't enable S2 geometry library")
+    endif()
+endif()
+
+message (STATUS "Using s2geometry=${USE_S2_GEOMETRY} : ${S2_GEOMETRY_INCLUDE_DIR}")
16 cmake/find/sqlite.cmake Normal file
@@ -0,0 +1,16 @@
+option(ENABLE_SQLITE "Enable sqlite" ${ENABLE_LIBRARIES})
+
+if (NOT ENABLE_SQLITE)
+    return()
+endif()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation/sqlite3.c")
+    message (WARNING "submodule contrib/sqlite3-amalgamation is missing. to fix try run: \n git submodule update --init --recursive")
+    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal sqlite library")
+    set (USE_SQLITE 0)
+    return()
+endif()
+
+set (USE_SQLITE 1)
+set (SQLITE_LIBRARY sqlite)
+message (STATUS "Using sqlite=${USE_SQLITE}")
@@ -1,4 +1,4 @@
-option(ENABLE_STATS "Enalbe StatsLib library" ${ENABLE_LIBRARIES})
+option(ENABLE_STATS "Enable StatsLib library" ${ENABLE_LIBRARIES})
 
 if (ENABLE_STATS)
     if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/stats")
15 contrib/CMakeLists.txt vendored
@@ -1,3 +1,4 @@
+# Third-party libraries may have substandard code.
 # Put all targets defined here and in added subfolders under "contrib/" folder in GUI-based IDEs by default.
 # Some of third-party projects may override CMAKE_FOLDER or FOLDER property of their targets, so they will
@@ -10,10 +11,8 @@ else ()
 endif ()
 unset (_current_dir_name)
 
-# Third-party libraries may have substandard code.
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")
+# Also remove a possible source of nondeterminism.
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=")
 
 if (WITH_COVERAGE)
     set (WITHOUT_COVERAGE_LIST ${WITHOUT_COVERAGE})
@@ -331,3 +330,11 @@ add_subdirectory(fast_float)
 add_subdirectory(libstemmer-c-cmake)
 add_subdirectory(wordnet-blast-cmake)
 add_subdirectory(lemmagen-c-cmake)
+
+if (USE_SQLITE)
+    add_subdirectory(sqlite-cmake)
+endif()
+
+if (USE_S2_GEOMETRY)
+    add_subdirectory(s2geometry-cmake)
+endif()
2 contrib/NuRaft vendored
@@ -1 +1 @@
-Subproject commit 976874b7aa7f422bf4ea595bb7d1166c617b1c26
+Subproject commit 0ce9490093021c63564cca159571a8b27772ad48
2 contrib/h3 vendored
@@ -1 +1 @@
-Subproject commit e209086ae1b5477307f545a0f6111780edc59940
+Subproject commit c7f46cfd71fb60e2fefc90e28abe81657deff735
@@ -3,21 +3,22 @@ set(H3_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib")
 
 set(SRCS
     "${H3_SOURCE_DIR}/lib/algos.c"
-    "${H3_SOURCE_DIR}/lib/baseCells.c"
-    "${H3_SOURCE_DIR}/lib/bbox.c"
     "${H3_SOURCE_DIR}/lib/coordijk.c"
-    "${H3_SOURCE_DIR}/lib/faceijk.c"
-    "${H3_SOURCE_DIR}/lib/geoCoord.c"
-    "${H3_SOURCE_DIR}/lib/h3Index.c"
-    "${H3_SOURCE_DIR}/lib/h3UniEdge.c"
-    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
-    "${H3_SOURCE_DIR}/lib/localij.c"
-    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+    "${H3_SOURCE_DIR}/lib/bbox.c"
     "${H3_SOURCE_DIR}/lib/polygon.c"
+    "${H3_SOURCE_DIR}/lib/h3Index.c"
     "${H3_SOURCE_DIR}/lib/vec2d.c"
     "${H3_SOURCE_DIR}/lib/vec3d.c"
     "${H3_SOURCE_DIR}/lib/vertex.c"
+    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
+    "${H3_SOURCE_DIR}/lib/localij.c"
+    "${H3_SOURCE_DIR}/lib/latLng.c"
+    "${H3_SOURCE_DIR}/lib/directedEdge.c"
+    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+    "${H3_SOURCE_DIR}/lib/iterators.c"
     "${H3_SOURCE_DIR}/lib/vertexGraph.c"
+    "${H3_SOURCE_DIR}/lib/faceijk.c"
+    "${H3_SOURCE_DIR}/lib/baseCells.c"
 )
 
 configure_file("${H3_SOURCE_DIR}/include/h3api.h.in" "${H3_BINARY_DIR}/include/h3api.h")
@@ -22,6 +22,7 @@ set(SRCS
     "${LIBRARY_DIR}/src/launcher.cxx"
     "${LIBRARY_DIR}/src/srv_config.cxx"
     "${LIBRARY_DIR}/src/snapshot_sync_req.cxx"
+    "${LIBRARY_DIR}/src/snapshot_sync_ctx.cxx"
     "${LIBRARY_DIR}/src/handle_timeout.cxx"
     "${LIBRARY_DIR}/src/handle_append_entries.cxx"
     "${LIBRARY_DIR}/src/cluster_config.cxx"
contrib/poco (vendored)
@ -1 +1 @@
-Subproject commit 5994506908028612869fee627d68d8212dfe7c1e
+Subproject commit 7351c4691b5d401f59e3959adfc5b4fa263b32da
contrib/rocksdb (vendored)
@ -1 +1 @@
-Subproject commit 07c77549a20b63ff6981b400085eba36bb5c80c4
+Subproject commit b6480c69bf3ab6e298e0d019a07fd4f69029b26a
@ -70,11 +70,6 @@ else()
   endif()
 endif()

-set(BUILD_VERSION_CC rocksdb_build_version.cc)
-add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC})
-
-target_include_directories(rocksdb_build_version PRIVATE "${ROCKSDB_SOURCE_DIR}/util")
-
 include(CheckCCompilerFlag)
 if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
   CHECK_C_COMPILER_FLAG("-mcpu=power9" HAS_POWER9)
@ -243,272 +238,293 @@ find_package(Threads REQUIRED)
 # Main library source code

 set(SOURCES
-        "${ROCKSDB_SOURCE_DIR}/cache/cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/cache/cache.cc
-        "${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/cache/cache_entry_roles.cc
-        "${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc"
+        ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_fetcher.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_garbage_meter.cc
-        "${ROCKSDB_SOURCE_DIR}/db/builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc
-        "${ROCKSDB_SOURCE_DIR}/db/c.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/db/column_family.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/builder.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/c.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/column_family.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc
-        "${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc
-        "${ROCKSDB_SOURCE_DIR}/db/convenience.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/convenience.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/compacted_db_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc
-        "${ROCKSDB_SOURCE_DIR}/db/db_iter.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc
-        "${ROCKSDB_SOURCE_DIR}/db/dbformat.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc
-        "${ROCKSDB_SOURCE_DIR}/db/error_handler.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc
-        "${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc
-        "${ROCKSDB_SOURCE_DIR}/db/experimental.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc
-        "${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc
-        "${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc
-        "${ROCKSDB_SOURCE_DIR}/db/flush_job.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/experimental.cc
-        "${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc
-        "${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc
-        "${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc
-        "${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc
-        "${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/db/log_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc
-        "${ROCKSDB_SOURCE_DIR}/db/log_writer.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc
-        "${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc
-        "${ROCKSDB_SOURCE_DIR}/db/memtable.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc
-        "${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc
-        "${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/memtable.cc
-        "${ROCKSDB_SOURCE_DIR}/db/output_validator.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc
-        "${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc
-        "${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc
-        "${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/output_validator.cc
-        "${ROCKSDB_SOURCE_DIR}/db/repair.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc
-        "${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc
-        "${ROCKSDB_SOURCE_DIR}/db/table_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc
-        "${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/repair.cc
-        "${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/db/version_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc
-        "${ROCKSDB_SOURCE_DIR}/db/version_edit.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc
-        "${ROCKSDB_SOURCE_DIR}/db/version_set.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc
-        "${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc
-        "${ROCKSDB_SOURCE_DIR}/db/write_batch.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/version_set.cc
-        "${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc
-        "${ROCKSDB_SOURCE_DIR}/db/write_controller.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc
-        "${ROCKSDB_SOURCE_DIR}/db/write_thread.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc
-        "${ROCKSDB_SOURCE_DIR}/env/env.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc
-        "${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc
-        "${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc"
+        ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc
-        "${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/composite_env.cc
-        "${ROCKSDB_SOURCE_DIR}/env/file_system.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/env.cc
-        "${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc
-        "${ROCKSDB_SOURCE_DIR}/env/mock_env.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc
-        "${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc
-        "${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/file_system.cc
-        "${ROCKSDB_SOURCE_DIR}/file/file_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc
-        "${ROCKSDB_SOURCE_DIR}/file/filename.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/fs_remap.cc
-        "${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc
-        "${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc
-        "${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc
-        "${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/file_util.cc
-        "${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/filename.cc
-        "${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/line_file_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc
-        "${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc
-        "${ROCKSDB_SOURCE_DIR}/memory/arena.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc"
+        ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc
-        "${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc"
+        ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc"
+        ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc"
+        ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc"
+        ${ROCKSDB_SOURCE_DIR}/memory/arena.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc"
+        ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc"
+        ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc
-        "${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc"
+        ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc"
+        ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc
-        "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc
-        "${ROCKSDB_SOURCE_DIR}/options/cf_options.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc
-        "${ROCKSDB_SOURCE_DIR}/options/configurable.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc
-        "${ROCKSDB_SOURCE_DIR}/options/customizable.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/options/db_options.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc
-        "${ROCKSDB_SOURCE_DIR}/options/options.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc
-        "${ROCKSDB_SOURCE_DIR}/options/options_helper.cc"
+        ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc
-        "${ROCKSDB_SOURCE_DIR}/options/options_parser.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc
-        "${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/configurable.cc
-        "${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/customizable.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/db_options.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/options.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc"
+        ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc
-        "${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc
-        "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc
-        "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/format.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc
-        "${ROCKSDB_SOURCE_DIR}/table/get_context.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc
-        "${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc
-        "${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/format.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/get_context.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc
-        "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc
-        "${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc
-        "${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc
-        "${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc
-        "${ROCKSDB_SOURCE_DIR}/table/table_factory.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc
-        "${ROCKSDB_SOURCE_DIR}/table/table_properties.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc
-        "${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc
-        "${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc
-        "${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc
-        "${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/table_factory.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc"
+        ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc
-        "${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc
-        "${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc
-        "${ROCKSDB_SOURCE_DIR}/util/coding.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/util/comparator.cc"
+        ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc
-        "${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc
-        "${ROCKSDB_SOURCE_DIR}/util/crc32c.cc"
+        ${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc
-        "${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/coding.cc
-        "${ROCKSDB_SOURCE_DIR}/util/hash.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/comparator.cc
-        "${ROCKSDB_SOURCE_DIR}/util/random.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/util/slice.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
-        "${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc
-        "${ROCKSDB_SOURCE_DIR}/util/status.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/hash.cc
-        "${ROCKSDB_SOURCE_DIR}/util/string_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc
-        "${ROCKSDB_SOURCE_DIR}/util/thread_local.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/random.cc
-        "${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc
-        "${ROCKSDB_SOURCE_DIR}/util/xxhash.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/ribbon_config.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/slice.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/status.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/string_util.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/thread_local.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc"
+        ${ROCKSDB_SOURCE_DIR}/util/xxhash.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/debug.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/debug.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_manager.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_tracker.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc
-        "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc"
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc
-        $<TARGET_OBJECTS:rocksdb_build_version>)
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc
+        ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/concurrent_tree.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/keyrange.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/lock_request.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/locktree.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/manager.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/range_buffer.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/treenode.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/txnid_set.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/wfg.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/standalone_port.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/dbt.cc
|
||||||
|
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/memarena.cc
|
||||||
|
rocksdb_build_version.cc)
|
||||||
|
|
||||||
if(HAVE_SSE42 AND NOT MSVC)
|
if(HAVE_SSE42 AND NOT MSVC)
|
||||||
set_source_files_properties(
|
set_source_files_properties(
|
||||||
|
@@ -1,3 +1,62 @@
-const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
-const char* rocksdb_build_git_date = "rocksdb_build_git_date:2000-01-01";
-const char* rocksdb_build_compile_date = "2000-01-01";
+// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+/// This file was edited for ClickHouse.
+
+#include <memory>
+
+#include "rocksdb/version.h"
+#include "util/string_util.h"
+
+// The build script may replace these values with real values based
+// on whether or not GIT is available and the platform settings
+static const std::string rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
+static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:master";
+static const std::string rocksdb_build_date = "rocksdb_build_date:2000-01-01";
+
+namespace ROCKSDB_NAMESPACE {
+static void AddProperty(std::unordered_map<std::string, std::string> *props, const std::string& name) {
+  size_t colon = name.find(":");
+  if (colon != std::string::npos && colon > 0 && colon < name.length() - 1) {
+    // If we found a "@:", then this property was a build-time substitution that failed. Skip it
+    size_t at = name.find("@", colon);
+    if (at != colon + 1) {
+      // Everything before the colon is the name, after is the value
+      (*props)[name.substr(0, colon)] = name.substr(colon + 1);
+    }
+  }
+}
+
+static std::unordered_map<std::string, std::string>* LoadPropertiesSet() {
+  auto * properties = new std::unordered_map<std::string, std::string>();
+  AddProperty(properties, rocksdb_build_git_sha);
+  AddProperty(properties, rocksdb_build_git_tag);
+  AddProperty(properties, rocksdb_build_date);
+  return properties;
+}
+
+const std::unordered_map<std::string, std::string>& GetRocksBuildProperties() {
+  static std::unique_ptr<std::unordered_map<std::string, std::string>> props(LoadPropertiesSet());
+  return *props;
+}
+
+std::string GetRocksVersionAsString(bool with_patch) {
+  std::string version = ToString(ROCKSDB_MAJOR) + "." + ToString(ROCKSDB_MINOR);
+  if (with_patch) {
+    return version + "." + ToString(ROCKSDB_PATCH);
+  } else {
+    return version;
+  }
+}
+
+std::string GetRocksBuildInfoAsString(const std::string& program, bool verbose) {
+  std::string info = program + " (RocksDB) " + GetRocksVersionAsString(true);
+  if (verbose) {
+    for (const auto& it : GetRocksBuildProperties()) {
+      info.append("\n ");
+      info.append(it.first);
+      info.append(": ");
+      info.append(it.second);
+    }
+  }
+  return info;
+}
+} // namespace ROCKSDB_NAMESPACE
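The `AddProperty` helper above keys each property on its first colon and silently drops entries whose value begins with `@` (the mark of a build-time substitution that never ran). A minimal Python sketch of the same parsing rule, for illustration only (the function name `add_property` is not part of RocksDB):

```python
def add_property(props: dict, name: str) -> None:
    """Split "key:value" on the first colon; skip entries whose value
    starts with '@' (a failed build-time substitution), as in RocksDB's
    AddProperty."""
    colon = name.find(":")
    # The colon must exist and must not be the first or last character.
    # str.find returns -1 when absent, which fails the `colon > 0` test.
    if 0 < colon < len(name) - 1:
        if name[colon + 1] != "@":
            props[name[:colon]] = name[colon + 1:]

props = {}
add_property(props, "rocksdb_build_git_sha:0")
add_property(props, "rocksdb_build_git_tag:@GIT_TAG@")  # skipped: substitution failed
print(props)  # {'rocksdb_build_git_sha': '0'}
```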
contrib/s2geometry (vendored submodule)
@@ -0,0 +1 @@
+Subproject commit 20ea540d81f4575a3fc0aea585aac611bcd03ede
contrib/s2geometry-cmake/CMakeLists.txt (new file)
@@ -0,0 +1,128 @@
+set(S2_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/s2geometry/src")
+
+set(S2_SRCS
+    "${S2_SOURCE_DIR}/s2/base/stringprintf.cc"
+    "${S2_SOURCE_DIR}/s2/base/strtoint.cc"
+    "${S2_SOURCE_DIR}/s2/encoded_s2cell_id_vector.cc"
+    "${S2_SOURCE_DIR}/s2/encoded_s2point_vector.cc"
+    "${S2_SOURCE_DIR}/s2/encoded_s2shape_index.cc"
+    "${S2_SOURCE_DIR}/s2/encoded_string_vector.cc"
+    "${S2_SOURCE_DIR}/s2/id_set_lexicon.cc"
+    "${S2_SOURCE_DIR}/s2/mutable_s2shape_index.cc"
+    "${S2_SOURCE_DIR}/s2/r2rect.cc"
+    "${S2_SOURCE_DIR}/s2/s1angle.cc"
+    "${S2_SOURCE_DIR}/s2/s1chord_angle.cc"
+    "${S2_SOURCE_DIR}/s2/s1interval.cc"
+    "${S2_SOURCE_DIR}/s2/s2boolean_operation.cc"
+    "${S2_SOURCE_DIR}/s2/s2builder.cc"
+    "${S2_SOURCE_DIR}/s2/s2builder_graph.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_closed_set_normalizer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_find_polygon_degeneracies.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_lax_polygon_layer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_s2point_vector_layer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_s2polygon_layer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_s2polyline_layer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_s2polyline_vector_layer.cc"
+    "${S2_SOURCE_DIR}/s2/s2builderutil_snap_functions.cc"
+    "${S2_SOURCE_DIR}/s2/s2cap.cc"
+    "${S2_SOURCE_DIR}/s2/s2cell.cc"
+    "${S2_SOURCE_DIR}/s2/s2cell_id.cc"
+    "${S2_SOURCE_DIR}/s2/s2cell_index.cc"
+    "${S2_SOURCE_DIR}/s2/s2cell_union.cc"
+    "${S2_SOURCE_DIR}/s2/s2centroids.cc"
+    "${S2_SOURCE_DIR}/s2/s2closest_cell_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2closest_edge_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2closest_point_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2contains_vertex_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2convex_hull_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2coords.cc"
+    "${S2_SOURCE_DIR}/s2/s2crossing_edge_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2debug.cc"
+    "${S2_SOURCE_DIR}/s2/s2earth.cc"
+    "${S2_SOURCE_DIR}/s2/s2edge_clipping.cc"
+    "${S2_SOURCE_DIR}/s2/s2edge_crosser.cc"
+    "${S2_SOURCE_DIR}/s2/s2edge_crossings.cc"
+    "${S2_SOURCE_DIR}/s2/s2edge_distances.cc"
+    "${S2_SOURCE_DIR}/s2/s2edge_tessellator.cc"
+    "${S2_SOURCE_DIR}/s2/s2error.cc"
+    "${S2_SOURCE_DIR}/s2/s2furthest_edge_query.cc"
+    "${S2_SOURCE_DIR}/s2/s2latlng.cc"
+    "${S2_SOURCE_DIR}/s2/s2latlng_rect.cc"
+    "${S2_SOURCE_DIR}/s2/s2latlng_rect_bounder.cc"
+    "${S2_SOURCE_DIR}/s2/s2lax_loop_shape.cc"
+    "${S2_SOURCE_DIR}/s2/s2lax_polygon_shape.cc"
+    "${S2_SOURCE_DIR}/s2/s2lax_polyline_shape.cc"
+    "${S2_SOURCE_DIR}/s2/s2loop.cc"
+    "${S2_SOURCE_DIR}/s2/s2loop_measures.cc"
+    "${S2_SOURCE_DIR}/s2/s2measures.cc"
+    "${S2_SOURCE_DIR}/s2/s2metrics.cc"
+    "${S2_SOURCE_DIR}/s2/s2max_distance_targets.cc"
+    "${S2_SOURCE_DIR}/s2/s2min_distance_targets.cc"
+    "${S2_SOURCE_DIR}/s2/s2padded_cell.cc"
+    "${S2_SOURCE_DIR}/s2/s2point_compression.cc"
+    "${S2_SOURCE_DIR}/s2/s2point_region.cc"
+    "${S2_SOURCE_DIR}/s2/s2pointutil.cc"
+    "${S2_SOURCE_DIR}/s2/s2polygon.cc"
+    "${S2_SOURCE_DIR}/s2/s2polyline.cc"
+    "${S2_SOURCE_DIR}/s2/s2polyline_alignment.cc"
+    "${S2_SOURCE_DIR}/s2/s2polyline_measures.cc"
+    "${S2_SOURCE_DIR}/s2/s2polyline_simplifier.cc"
+    "${S2_SOURCE_DIR}/s2/s2predicates.cc"
+    "${S2_SOURCE_DIR}/s2/s2projections.cc"
+    "${S2_SOURCE_DIR}/s2/s2r2rect.cc"
+    "${S2_SOURCE_DIR}/s2/s2region.cc"
+    "${S2_SOURCE_DIR}/s2/s2region_term_indexer.cc"
+    "${S2_SOURCE_DIR}/s2/s2region_coverer.cc"
+    "${S2_SOURCE_DIR}/s2/s2region_intersection.cc"
+    "${S2_SOURCE_DIR}/s2/s2region_union.cc"
+    "${S2_SOURCE_DIR}/s2/s2shape_index.cc"
+    "${S2_SOURCE_DIR}/s2/s2shape_index_buffered_region.cc"
+    "${S2_SOURCE_DIR}/s2/s2shape_index_measures.cc"
+    "${S2_SOURCE_DIR}/s2/s2shape_measures.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_build_polygon_boundaries.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_coding.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_contains_brute_force.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_edge_iterator.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_get_reference_point.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_range_iterator.cc"
+    "${S2_SOURCE_DIR}/s2/s2shapeutil_visit_crossing_edge_pairs.cc"
+    "${S2_SOURCE_DIR}/s2/s2text_format.cc"
+    "${S2_SOURCE_DIR}/s2/s2wedge_relations.cc"
+    "${S2_SOURCE_DIR}/s2/strings/ostringstream.cc"
+    "${S2_SOURCE_DIR}/s2/strings/serialize.cc"
+    # ClickHouse doesn't use strings from abseil.
+    # So, there is no duplicate symbols.
+    "${S2_SOURCE_DIR}/s2/third_party/absl/base/dynamic_annotations.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/base/internal/raw_logging.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/base/internal/throw_delegate.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/numeric/int128.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/ascii.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/match.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/numbers.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/str_cat.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/str_split.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/string_view.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/strip.cc"
+    "${S2_SOURCE_DIR}/s2/third_party/absl/strings/internal/memutil.cc"
+    "${S2_SOURCE_DIR}/s2/util/bits/bit-interleave.cc"
+    "${S2_SOURCE_DIR}/s2/util/bits/bits.cc"
+    "${S2_SOURCE_DIR}/s2/util/coding/coder.cc"
+    "${S2_SOURCE_DIR}/s2/util/coding/varint.cc"
+    "${S2_SOURCE_DIR}/s2/util/math/exactfloat/exactfloat.cc"
+    "${S2_SOURCE_DIR}/s2/util/math/mathutil.cc"
+    "${S2_SOURCE_DIR}/s2/util/units/length-units.cc"
+)
+
+add_library(s2 ${S2_SRCS})
+
+set_property(TARGET s2 PROPERTY CXX_STANDARD 11)
+
+if (OPENSSL_FOUND)
+    target_link_libraries(s2 PRIVATE ${OPENSSL_LIBRARIES})
+endif()
+
+target_include_directories(s2 SYSTEM BEFORE PUBLIC "${S2_SOURCE_DIR}/")
+
+if(M_LIBRARY)
+    target_link_libraries(s2 PRIVATE ${M_LIBRARY})
+endif()
contrib/sqlite-amalgamation (vendored submodule)
@@ -0,0 +1 @@
+Subproject commit 9818baa5d027ffb26d57f810dc4c597d4946781c
contrib/sqlite-cmake/CMakeLists.txt (new file)
@@ -0,0 +1,6 @@
+set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation")
+
+set(SRCS ${LIBRARY_DIR}/sqlite3.c)
+
+add_library(sqlite ${SRCS})
+target_include_directories(sqlite SYSTEM PUBLIC "${LIBRARY_DIR}")
debian/changelog
@@ -1,5 +1,5 @@
-clickhouse (21.8.1.1) unstable; urgency=low
+clickhouse (21.9.1.1) unstable; urgency=low

   * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Mon, 28 Jun 2021 00:50:15 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Jul 2021 08:22:49 +0300
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.8.1.*
+ARG version=21.9.1.*

 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@@ -27,7 +27,7 @@ RUN apt-get update \
 # Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
 # to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
 # Significantly increase deb packaging speed and compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
     && chmod +x dpkg-deb \
     && cp dpkg-deb /usr/bin
@@ -2,7 +2,7 @@
 FROM yandex/clickhouse-deb-builder

 RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.jfrog.io/artifactory/arrow/ubuntu/apache-arrow-apt-source-latest-${CODENAME}.deb" \
     && dpkg -i /tmp/arrow-keyring.deb

 # Libraries from OS are only needed to test the "unbundled" build (that is not used in production).
@@ -1,7 +1,7 @@
 FROM ubuntu:20.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.8.1.*
+ARG version=21.9.1.*
 ARG gosu_ver=1.10

 # set non-empty deb_location_url url to create a docker image
@@ -72,7 +72,10 @@ do

     if [ "$DO_CHOWN" = "1" ]; then
         # ensure proper directories permissions
-        chown -R "$USER:$GROUP" "$dir"
+        # but skip it for if directory already has proper premissions, cause recursive chown may be slow
+        if [ "$(stat -c %u "$dir")" != "$USER" ] || [ "$(stat -c %g "$dir")" != "$GROUP" ]; then
+            chown -R "$USER:$GROUP" "$dir"
+        fi
     elif ! $gosu test -d "$dir" -a -w "$dir" -a -r "$dir"; then
         echo "Necessary directory '$dir' isn't accessible by user with id '$USER'"
         exit 1
@@ -161,6 +164,10 @@ fi

 # if no args passed to `docker run` or first argument start with `--`, then the user is passing clickhouse-server arguments
 if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
+    # Watchdog is launched by default, but does not send SIGINT to the main process,
+    # so the container can't be finished by ctrl+c
+    CLICKHOUSE_WATCHDOG_ENABLE=${CLICKHOUSE_WATCHDOG_ENABLE:-0}
+    export CLICKHOUSE_WATCHDOG_ENABLE
     exec $gosu /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" "$@"
 fi
|
|||||||
FROM ubuntu:18.04
|
FROM ubuntu:18.04
|
||||||
|
|
||||||
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
|
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
|
||||||
ARG version=21.8.1.*
|
ARG version=21.9.1.*
|
||||||
|
|
||||||
RUN apt-get update && \
|
RUN apt-get update && \
|
||||||
apt-get install -y apt-transport-https dirmngr && \
|
apt-get install -y apt-transport-https dirmngr && \
|
||||||
|
@@ -27,7 +27,7 @@ RUN apt-get update \
 # Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
 # to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
 # Significantly increase deb packaging speed and compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
     && chmod +x dpkg-deb \
     && cp dpkg-deb /usr/bin
@@ -61,4 +61,7 @@ ENV TSAN_OPTIONS='halt_on_error=1 history_size=7'
 ENV UBSAN_OPTIONS='print_stacktrace=1'
 ENV MSAN_OPTIONS='abort_on_error=1 poison_in_dtor=1'

+ENV TZ=Europe/Moscow
+RUN ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 CMD sleep 1
@@ -27,7 +27,7 @@ RUN apt-get update \
 # Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
 # to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
 # Significantly increase deb packaging speed and compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
     && chmod +x dpkg-deb \
     && cp dpkg-deb /usr/bin
@@ -65,7 +65,7 @@ RUN apt-get update \
     unixodbc \
     --yes --no-install-recommends

-RUN pip3 install numpy scipy pandas
+RUN pip3 install numpy scipy pandas Jinja2

 # This symlink required by gcc to find lld compiler
 RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
@@ -381,6 +381,16 @@ function run_tests

     # needs pv
     01923_network_receive_time_metric_insert
+
+    01889_sqlite_read_write
+
+    # needs s2
+    01849_geoToS2
+    01851_s2_to_geo
+    01852_s2_get_neighbours
+    01853_s2_cells_intersect
+    01854_s2_cap_contains
+    01854_s2_cap_union
 )

 time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
@@ -194,6 +194,10 @@ continue
     jobs
     pstree -aspgT

+    server_exit_code=0
+    wait $server_pid || server_exit_code=$?
+    echo "Server exit code is $server_exit_code"
+
     # Make files with status and description we'll show for this check on Github.
     task_exit_code=$fuzzer_exit_code
     if [ "$server_died" == 1 ]
@@ -32,7 +32,7 @@ RUN rm -rf \
 RUN apt-get clean

 # Install MySQL ODBC driver
-RUN curl 'https://cdn.mysql.com//Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
+RUN curl 'https://downloads.mysql.com/archives/get/p/10/file/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --location --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so

 # Unfortunately this is required for a single test for conversion data from zookeeper to clickhouse-keeper.
 # ZooKeeper is not started by default, but consumes some space in containers.
@@ -49,4 +49,3 @@ RUN mkdir /zookeeper && chmod -R 777 /zookeeper

 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
@@ -76,6 +76,7 @@ RUN python3 -m pip install \
     pytest \
     pytest-timeout \
     pytest-xdist \
+    pytest-repeat \
     redis \
     tzlocal \
     urllib3 \
@@ -14,10 +14,14 @@ services:
                 }
             EOF
             ./docker-entrypoint.sh'
-        ports:
-            - 9020:9019
+        expose:
+            - 9019
         healthcheck:
             test: ["CMD", "curl", "-s", "localhost:9019/ping"]
             interval: 5s
             timeout: 3s
             retries: 30
+        volumes:
+            - type: ${JDBC_BRIDGE_FS:-tmpfs}
+              source: ${JDBC_BRIDGE_LOGS:-}
+              target: /app/logs
@@ -2,7 +2,7 @@ version: '2.3'
 services:
     postgres1:
         image: postgres
-        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all", "-c", "max_connections=200"]
         restart: always
         expose:
             - ${POSTGRES_PORT}
@@ -2,7 +2,7 @@ version: '2.3'

 services:
     rabbitmq1:
-        image: rabbitmq:3-management-alpine
+        image: rabbitmq:3.8-management-alpine
         hostname: rabbitmq1
         expose:
             - ${RABBITMQ_PORT}
@@ -1196,7 +1196,7 @@ create table changes engine File(TSV, 'metrics/changes.tsv') as
             if(left > right, left / right, right / left) times_diff
         from metrics
         group by metric
-        having abs(diff) > 0.05 and isFinite(diff)
+        having abs(diff) > 0.05 and isFinite(diff) and isFinite(times_diff)
     )
     order by diff desc
 ;
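The extra `isFinite(times_diff)` guard above drops metric rows whose larger-to-smaller ratio is infinite or NaN (for example, when the smaller value is zero). The intent of the filter, sketched outside SQL with illustrative helper names:

```python
import math

def times_diff(left: float, right: float) -> float:
    """Ratio of the larger to the smaller value, as in the SQL
    expression if(left > right, left / right, right / left)."""
    hi, lo = (left, right) if left > right else (right, left)
    return hi / lo if lo != 0 else math.inf  # ClickHouse yields inf for x/0

def keep_row(diff: float, left: float, right: float) -> bool:
    """Mirror of the amended HAVING clause."""
    td = times_diff(left, right)
    return abs(diff) > 0.05 and math.isfinite(diff) and math.isfinite(td)

print(keep_row(0.2, 1.2, 1.0))  # True
print(keep_row(0.2, 1.2, 0.0))  # False: times_diff is infinite
```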
@@ -2,6 +2,11 @@

 set -e -x

+# Choose random timezone for this test run
+TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
+echo "Choosen random timezone $TZ"
+ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 dpkg -i package_folder/clickhouse-common-static_*.deb;
 dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
 dpkg -i package_folder/clickhouse-server_*.deb
@@ -29,9 +29,10 @@ RUN apt-get update -y \
     unixodbc \
     wget \
     mysql-client=5.7* \
-    postgresql-client
+    postgresql-client \
+    sqlite3

-RUN pip3 install numpy scipy pandas
+RUN pip3 install numpy scipy pandas Jinja2

 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
     && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
@@ -12,7 +12,7 @@ UNKNOWN_SIGN = "[ UNKNOWN "
 SKIPPED_SIGN = "[ SKIPPED "
 HUNG_SIGN = "Found hung queries in processlist"

-NO_TASK_TIMEOUT_SIGN = "All tests have finished"
+NO_TASK_TIMEOUT_SIGNS = ["All tests have finished", "No tests were run"]

 RETRIES_SIGN = "Some tests were restarted"

@@ -29,7 +29,7 @@ def process_test_log(log_path):
     with open(log_path, 'r') as test_file:
         for line in test_file:
             line = line.strip()
-            if NO_TASK_TIMEOUT_SIGN in line:
+            if any(s in line for s in NO_TASK_TIMEOUT_SIGNS):
                 task_timeout = False
             if HUNG_SIGN in line:
                 hung = True
@@ -80,6 +80,7 @@ def process_result(result_path):
     if result_path and os.path.exists(result_path):
         total, skipped, unknown, failed, success, hung, task_timeout, retries, test_results = process_test_log(result_path)
         is_flacky_check = 1 < int(os.environ.get('NUM_TRIES', 1))
+        logging.info("Is flacky check: %s", is_flacky_check)
         # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately)
         # But it's Ok for "flaky checks" - they can contain just one test for check which is marked as skipped.
         if failed != 0 or unknown != 0 or (success == 0 and (not is_flacky_check)):
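The change from a single sign string to a list means the log scanner now treats either message as "no task timeout". The matching logic in isolation, condensed into a small self-contained sketch (the `scan` helper is illustrative, not part of the script):

```python
NO_TASK_TIMEOUT_SIGNS = ["All tests have finished", "No tests were run"]

def scan(lines):
    """Return the task_timeout flag after scanning log lines, mirroring
    the process_test_log loop: any 'finished' sign clears the flag."""
    task_timeout = True
    for line in lines:
        line = line.strip()
        if any(s in line for s in NO_TASK_TIMEOUT_SIGNS):
            task_timeout = False
    return task_timeout

print(scan(["running 01849_geoToS2", "No tests were run"]))  # False
print(scan(["running 01849_geoToS2"]))                       # True
```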
@@ -3,6 +3,11 @@
 # fail on errors, verbose and export all env variables
 set -e -x -a

+# Choose random timezone for this test run.
+TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
+echo "Choosen random timezone $TZ"
+ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 dpkg -i package_folder/clickhouse-common-static_*.deb
 dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
 dpkg -i package_folder/clickhouse-server_*.deb
@@ -138,15 +143,18 @@ if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
 fi
 tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||:
 tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
+tar -chf /test_output/zookeeper_log_dump.tar /var/lib/clickhouse/data/system/zookeeper_log ||:
 tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
     grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
     grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
     pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
     pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
     mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
     mv /var/log/clickhouse-server/stderr2.log /test_output/ ||:
+    tar -chf /test_output/zookeeper_log_dump1.tar /var/lib/clickhouse1/data/system/zookeeper_log ||:
+    tar -chf /test_output/zookeeper_log_dump2.tar /var/lib/clickhouse2/data/system/zookeeper_log ||:
     tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
     tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||:
|
||||||
fi
|
fi
|
||||||
|
@@ -77,9 +77,6 @@ RUN mkdir -p /tmp/clickhouse-odbc-tmp \
|
|||||||
&& odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
|
&& odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
|
||||||
&& rm -rf /tmp/clickhouse-odbc-tmp
|
&& rm -rf /tmp/clickhouse-odbc-tmp
|
||||||
|
|
||||||
ENV TZ=Europe/Moscow
|
|
||||||
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
|
|
||||||
|
|
||||||
COPY run.sh /
|
COPY run.sh /
|
||||||
CMD ["/bin/bash", "/run.sh"]
|
CMD ["/bin/bash", "/run.sh"]
|
||||||
|
|
||||||
|
@@ -58,11 +58,11 @@ function start()
|
|||||||
echo "Cannot start clickhouse-server"
|
echo "Cannot start clickhouse-server"
|
||||||
cat /var/log/clickhouse-server/stdout.log
|
cat /var/log/clickhouse-server/stdout.log
|
||||||
tail -n1000 /var/log/clickhouse-server/stderr.log
|
tail -n1000 /var/log/clickhouse-server/stderr.log
|
||||||
tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
|
tail -n100000 /var/log/clickhouse-server/clickhouse-server.log | grep -F -v '<Warning> RaftInstance:' -e '<Information> RaftInstance' | tail -n1000
|
||||||
break
|
break
|
||||||
fi
|
fi
|
||||||
# use root to match with current uid
|
# use root to match with current uid
|
||||||
clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>/var/log/clickhouse-server/stderr.log
|
clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>>/var/log/clickhouse-server/stderr.log
|
||||||
sleep 0.5
|
sleep 0.5
|
||||||
counter=$((counter + 1))
|
counter=$((counter + 1))
|
||||||
done
|
done
|
||||||
@@ -118,35 +118,35 @@ clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_
|
|||||||
[ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"
|
[ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"
|
||||||
|
|
||||||
# Print Fatal log messages to stdout
|
# Print Fatal log messages to stdout
|
||||||
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log
|
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log*
|
||||||
|
|
||||||
# Grep logs for sanitizer asserts, crashes and other critical errors
|
# Grep logs for sanitizer asserts, crashes and other critical errors
|
||||||
|
|
||||||
# Sanitizer asserts
|
# Sanitizer asserts
|
||||||
zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
|
zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
|
||||||
zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
|
zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
|
||||||
zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" > /dev/null \
|
zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" /test_output/tmp > /dev/null \
|
||||||
&& echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
|
&& echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
|
||||||
|| echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
|
|| echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
|
||||||
rm -f /test_output/tmp
|
rm -f /test_output/tmp
|
||||||
|
|
||||||
# OOM
|
# OOM
|
||||||
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
|
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
|
||||||
&& echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|
&& echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|
||||||
|| echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
|
|| echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
|
||||||
|
|
||||||
# Logical errors
|
# Logical errors
|
||||||
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
|
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
|
||||||
&& echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|
&& echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|
||||||
|| echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv
|
|| echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv
|
||||||
|
|
||||||
# Crash
|
# Crash
|
||||||
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
|
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
|
||||||
&& echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|
&& echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|
||||||
|| echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv
|
|| echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv
|
||||||
|
|
||||||
# It also checks for crash without stacktrace (printed by watchdog)
|
# It also checks for crash without stacktrace (printed by watchdog)
|
||||||
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
|
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
|
||||||
&& echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|
&& echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|
||||||
|| echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
|
|| echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
|
||||||
|
|
||||||
|
@@ -20,6 +20,7 @@ def get_skip_list_cmd(path):
|
|||||||
|
|
||||||
def get_options(i):
|
def get_options(i):
|
||||||
options = []
|
options = []
|
||||||
|
client_options = []
|
||||||
if 0 < i:
|
if 0 < i:
|
||||||
options.append("--order=random")
|
options.append("--order=random")
|
||||||
|
|
||||||
@@ -27,25 +28,29 @@ def get_options(i):
|
|||||||
options.append("--db-engine=Ordinary")
|
options.append("--db-engine=Ordinary")
|
||||||
|
|
||||||
if i % 3 == 2:
|
if i % 3 == 2:
|
||||||
options.append('''--client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
|
options.append('''--db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
|
||||||
|
client_options.append('allow_experimental_database_replicated=1')
|
||||||
|
|
||||||
# If the database name is not specified, a new database is created for each functional test.
|
# If the database name is not specified, a new database is created for each functional test.
|
||||||
# Run some threads with one database for all tests.
|
# Run some threads with one database for all tests.
|
||||||
if i % 2 == 1:
|
if i % 2 == 1:
|
||||||
options.append(" --database=test_{}".format(i))
|
options.append(" --database=test_{}".format(i))
|
||||||
|
|
||||||
if i % 7 == 0:
|
if i % 5 == 1:
|
||||||
options.append(" --client-option='join_use_nulls=1'")
|
client_options.append("join_use_nulls=1")
|
||||||
|
|
||||||
if i % 14 == 0:
|
if i % 15 == 6:
|
||||||
options.append(' --client-option="join_algorithm=\'partial_merge\'"')
|
client_options.append("join_algorithm='partial_merge'")
|
||||||
|
|
||||||
if i % 21 == 0:
|
if i % 15 == 11:
|
||||||
options.append(' --client-option="join_algorithm=\'auto\'"')
|
client_options.append("join_algorithm='auto'")
|
||||||
options.append(' --client-option="max_rows_in_join=1000"')
|
client_options.append('max_rows_in_join=1000')
|
||||||
|
|
||||||
if i == 13:
|
if i == 13:
|
||||||
options.append(" --client-option='memory_tracker_fault_probability=0.00001'")
|
client_options.append('memory_tracker_fault_probability=0.001')
|
||||||
|
|
||||||
|
if client_options:
|
||||||
|
options.append(" --client-option " + ' '.join(client_options))
|
||||||
|
|
||||||
return ' '.join(options)
|
return ' '.join(options)
|
||||||
|
|
||||||
|
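Assembled from the new-side lines of the hunk above, the refactored option builder reads as follows. This is a sketch, not the full script: client settings are now collected into a separate `client_options` list and emitted once via `--client-option`, and the branch appending `--db-engine=Ordinary` is elided because its condition lies outside the hunk.

```python
# Sketch of the refactored get_options from the diff above (assumptions noted).
def get_options(i):
    options = []
    client_options = []
    if 0 < i:
        options.append("--order=random")
    # (a branch appending --db-engine=Ordinary is elided; its condition is
    # outside the hunk shown in the diff)
    if i % 3 == 2:
        options.append('''--db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
        client_options.append('allow_experimental_database_replicated=1')
    # If the database name is not specified, a new database is created for each
    # functional test; run some threads with one database for all tests.
    if i % 2 == 1:
        options.append(" --database=test_{}".format(i))
    if i % 5 == 1:
        client_options.append("join_use_nulls=1")
    if i % 15 == 6:
        client_options.append("join_algorithm='partial_merge'")
    if i % 15 == 11:
        client_options.append("join_algorithm='auto'")
        client_options.append('max_rows_in_join=1000')
    if i == 13:
        client_options.append('memory_tracker_fault_probability=0.001')
    if client_options:
        options.append(" --client-option " + ' '.join(client_options))
    return ' '.join(options)
```

With this shape, several settings for one run collapse into a single `--client-option` argument instead of one per setting.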
@@ -1,8 +1,6 @@
|
|||||||
# docker build -t yandex/clickhouse-unit-test .
|
# docker build -t yandex/clickhouse-unit-test .
|
||||||
FROM yandex/clickhouse-stateless-test
|
FROM yandex/clickhouse-stateless-test
|
||||||
|
|
||||||
ENV TZ=Europe/Moscow
|
|
||||||
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
|
|
||||||
RUN apt-get install gdb
|
RUN apt-get install gdb
|
||||||
|
|
||||||
COPY run.sh /
|
COPY run.sh /
|
||||||
|
@@ -9,7 +9,7 @@ Many developers can say that the code is the best docs by itself, and they are r
|
|||||||
If you want to help ClickHouse with documentation, you may face, for example, the following questions:
|
If you want to help ClickHouse with documentation, you may face, for example, the following questions:
|
||||||
|
|
||||||
- "I don't know how to write."
|
- "I don't know how to write."
|
||||||
|
|
||||||
We have prepared some [recommendations](#what-to-write) for you.
|
We have prepared some [recommendations](#what-to-write) for you.
|
||||||
|
|
||||||
- "I know what I want to write, but I don't know how to contribute to docs."
|
- "I know what I want to write, but I don't know how to contribute to docs."
|
||||||
@@ -71,17 +71,17 @@ Contribute all new information in English language. Other languages are translat
|
|||||||
```
|
```
|
||||||
|
|
||||||
- Bold text: `**asterisks**` or `__underlines__`.
|
- Bold text: `**asterisks**` or `__underlines__`.
|
||||||
- Links: `[link text](uri)`. Examples:
|
- Links: `[link text](uri)`. Examples:
|
||||||
|
|
||||||
- External link: `[ClickHouse repo](https://github.com/ClickHouse/ClickHouse)`
|
- External link: `[ClickHouse repo](https://github.com/ClickHouse/ClickHouse)`
|
||||||
- Cross link: `[How to build docs](tools/README.md)`
|
- Cross link: `[How to build docs](tools/README.md)`
|
||||||
|
|
||||||
- Images: `![Exclamation sign](uri)`. You can refer to local images as well as remote images on the internet.
|
- Images: `![Exclamation sign](uri)`. You can refer to local images as well as remote images on the internet.
|
||||||
- Lists: Lists can be of two types:
|
- Lists: Lists can be of two types:
|
||||||
|
|
||||||
- `- unordered`: Each item starts with a `-`.
|
- `- unordered`: Each item starts with a `-`.
|
||||||
- `1. ordered`: Each item starts with a number.
|
- `1. ordered`: Each item starts with a number.
|
||||||
|
|
||||||
A list must be separated from the text by an empty line. Nested lists must be indented with 4 spaces.
|
A list must be separated from the text by an empty line. Nested lists must be indented with 4 spaces.
|
||||||
|
|
||||||
- Inline code: `` `in backticks` ``.
|
- Inline code: `` `in backticks` ``.
|
||||||
@@ -107,7 +107,7 @@ Contribute all new information in English language. Other languages are translat
|
|||||||
- Text hidden behind a cut (single string that opens on click):
|
- Text hidden behind a cut (single string that opens on click):
|
||||||
|
|
||||||
```text
|
```text
|
||||||
<details markdown="1"> <summary>Visible text</summary>
|
<details markdown="1"> <summary>Visible text</summary>
|
||||||
Hidden content.
|
Hidden content.
|
||||||
</details>`.
|
</details>`.
|
||||||
```
|
```
|
||||||
|
@@ -1,6 +1,6 @@
|
|||||||
---
|
---
|
||||||
toc_priority:
|
toc_priority:
|
||||||
toc_title:
|
toc_title:
|
||||||
---
|
---
|
||||||
|
|
||||||
# data_type_name {#data_type-name}
|
# data_type_name {#data_type-name}
|
||||||
|
@@ -58,6 +58,6 @@ Result:
|
|||||||
|
|
||||||
Follow up with any text to clarify the example.
|
Follow up with any text to clarify the example.
|
||||||
|
|
||||||
**See Also**
|
**See Also**
|
||||||
|
|
||||||
- [link](#)
|
- [link](#)
|
||||||
|
@@ -14,8 +14,8 @@ More text (Optional).
|
|||||||
|
|
||||||
**Arguments** (Optional)
|
**Arguments** (Optional)
|
||||||
|
|
||||||
- `x` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).
|
- `x` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).
|
||||||
- `y` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).
|
- `y` — Description. Optional (only for optional arguments). Possible values: <values list>. Default value: <value>. [Type name](relative/path/to/type/dscr.md#type).
|
||||||
|
|
||||||
**Parameters** (Optional, only for parametric aggregate functions)
|
**Parameters** (Optional, only for parametric aggregate functions)
|
||||||
|
|
||||||
@@ -23,7 +23,7 @@ More text (Optional).
|
|||||||
|
|
||||||
**Returned value(s)**
|
**Returned value(s)**
|
||||||
|
|
||||||
- Returned values list.
|
- Returned values list.
|
||||||
|
|
||||||
Type: [Type name](relative/path/to/type/dscr.md#type).
|
Type: [Type name](relative/path/to/type/dscr.md#type).
|
||||||
|
|
||||||
|
@@ -16,8 +16,8 @@ Better:
|
|||||||
option(ENABLE_TESTS "Provide unit_test_dbms target with Google.test unit tests" OFF)
|
option(ENABLE_TESTS "Provide unit_test_dbms target with Google.test unit tests" OFF)
|
||||||
```
|
```
|
||||||
|
|
||||||
If the option's purpose can't be guessed by its name, or the purpose guess may be misleading, or option has some
|
If the option's purpose can't be guessed by its name, or the purpose guess may be misleading, or option has some
|
||||||
pre-conditions, leave a comment above the `option()` line and explain what it does.
|
pre-conditions, leave a comment above the `option()` line and explain what it does.
|
||||||
The best way would be linking the docs page (if it exists).
|
The best way would be linking the docs page (if it exists).
|
||||||
The comment is parsed into a separate column (see below).
|
The comment is parsed into a separate column (see below).
|
||||||
|
|
||||||
@@ -33,7 +33,7 @@ option(ENABLE_TESTS "Provide unit_test_dbms target with Google.test unit tests"
|
|||||||
|
|
||||||
Suppose you have an option that may strip debug symbols from a part of ClickHouse.
|
Suppose you have an option that may strip debug symbols from a part of ClickHouse.
|
||||||
This can speed up the linking process, but produces a binary that cannot be debugged.
|
This can speed up the linking process, but produces a binary that cannot be debugged.
|
||||||
In that case, prefer explicitly raising a warning telling the developer that they may be doing something wrong.
|
In that case, prefer explicitly raising a warning telling the developer that they may be doing something wrong.
|
||||||
Also, such options should be disabled by default where applicable.
|
Also, such options should be disabled by default where applicable.
|
||||||
|
|
||||||
Bad:
|
Bad:
|
||||||
|
@@ -7,7 +7,7 @@ toc_title: Support
|
|||||||
|
|
||||||
!!! info "Info"
|
!!! info "Info"
|
||||||
If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list.
|
If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list.
|
||||||
|
|
||||||
## Yandex.Cloud
|
## Yandex.Cloud
|
||||||
|
|
||||||
ClickHouse worldwide support from the authors of ClickHouse. Supports on-premise and cloud deployments. Ask details on clickhouse-support@yandex-team.com
|
ClickHouse worldwide support from the authors of ClickHouse. Supports on-premise and cloud deployments. Ask details on clickhouse-support@yandex-team.com
|
||||||
|
@@ -4,11 +4,11 @@ ClickHouse has hundreds (or even thousands) of features. Every commit gets check
|
|||||||
|
|
||||||
The core functionality is very well tested, but some corner cases and different combinations of features can remain uncovered by ClickHouse CI.
|
The core functionality is very well tested, but some corner cases and different combinations of features can remain uncovered by ClickHouse CI.
|
||||||
|
|
||||||
Most of the bugs/regressions we see happen in that 'grey area' where test coverage is poor.
|
Most of the bugs/regressions we see happen in that 'grey area' where test coverage is poor.
|
||||||
|
|
||||||
And we are very interested in covering with tests most of the possible scenarios and feature combinations used in real life.
|
And we are very interested in covering with tests most of the possible scenarios and feature combinations used in real life.
|
||||||
|
|
||||||
## Why add tests
|
## Why add tests
|
||||||
|
|
||||||
Why/when you should add a test case into ClickHouse code:
|
Why/when you should add a test case into ClickHouse code:
|
||||||
1) you use some complicated scenarios / feature combinations, or you have some corner case which is probably not widely used
|
1) you use some complicated scenarios / feature combinations, or you have some corner case which is probably not widely used
|
||||||
@@ -17,18 +17,18 @@ Why/when you should add a test case into ClickHouse code:
|
|||||||
4) once the test is added/accepted, you can be sure the corner case you check will never be accidentally broken.
|
4) once the test is added/accepted, you can be sure the corner case you check will never be accidentally broken.
|
||||||
5) you will be a part of a great open-source community
|
5) you will be a part of a great open-source community
|
||||||
6) your name will be visible in the `system.contributors` table!
|
6) your name will be visible in the `system.contributors` table!
|
||||||
7) you will make the world a bit better :)
|
7) you will make the world a bit better :)
|
||||||
|
|
||||||
### Steps to do
|
### Steps to do
|
||||||
|
|
||||||
#### Prerequisite
|
#### Prerequisite
|
||||||
|
|
||||||
I assume you run some Linux machine (you can use docker / virtual machines on other OSes), have a modern browser and an internet connection, and have some basic Linux & SQL skills.
|
I assume you run some Linux machine (you can use docker / virtual machines on other OSes), have a modern browser and an internet connection, and have some basic Linux & SQL skills.
|
||||||
|
|
||||||
No highly specialized knowledge is needed (you don't need to know C++ or anything about how ClickHouse CI works).
|
No highly specialized knowledge is needed (you don't need to know C++ or anything about how ClickHouse CI works).
|
||||||
|
|
||||||
|
|
||||||
#### Preparation
|
#### Preparation
|
||||||
|
|
||||||
1) [create GitHub account](https://github.com/join) (if you don't have one yet)
|
1) [create GitHub account](https://github.com/join) (if you don't have one yet)
|
||||||
2) [setup git](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/set-up-git)
|
2) [setup git](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/set-up-git)
|
||||||
@@ -54,17 +54,17 @@ git remote add upstream https://github.com/ClickHouse/ClickHouse
|
|||||||
|
|
||||||
#### New branch for the test
|
#### New branch for the test
|
||||||
|
|
||||||
1) create a new branch from the latest clickhouse master
|
1) create a new branch from the latest clickhouse master
|
||||||
```
|
```
|
||||||
cd ~/workspace/ClickHouse
|
cd ~/workspace/ClickHouse
|
||||||
git fetch upstream
|
git fetch upstream
|
||||||
git checkout -b name_for_a_branch_with_my_test upstream/master
|
git checkout -b name_for_a_branch_with_my_test upstream/master
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Install & run clickhouse
|
#### Install & run clickhouse
|
||||||
|
|
||||||
1) install `clickhouse-server` (follow [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
|
1) install `clickhouse-server` (follow [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
|
||||||
2) install test configurations (this will use a ZooKeeper mock implementation and adjust some settings)
|
2) install test configurations (this will use a ZooKeeper mock implementation and adjust some settings)
|
||||||
```
|
```
|
||||||
cd ~/workspace/ClickHouse/tests/config
|
cd ~/workspace/ClickHouse/tests/config
|
||||||
sudo ./install.sh
|
sudo ./install.sh
|
||||||
@@ -74,7 +74,7 @@ sudo ./install.sh
|
|||||||
sudo systemctl restart clickhouse-server
|
sudo systemctl restart clickhouse-server
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Creating the test file
|
#### Creating the test file
|
||||||
|
|
||||||
|
|
||||||
1) find the number for your test - find the file with the biggest number in `tests/queries/0_stateless/`
|
1) find the number for your test - find the file with the biggest number in `tests/queries/0_stateless/`
|
||||||
@@ -86,7 +86,7 @@ tests/queries/0_stateless/01520_client_print_query_id.reference
|
|||||||
```
|
```
|
||||||
Currently, the last number for the test is `01520`, so my test will have the number `01521`
|
Currently, the last number for the test is `01520`, so my test will have the number `01521`
|
||||||
|
|
||||||
2) create an SQL file with the next number and name of the feature you test
|
2) create an SQL file with the next number and name of the feature you test
|
||||||
|
|
||||||
```sh
|
```sh
|
||||||
touch tests/queries/0_stateless/01521_dummy_test.sql
|
touch tests/queries/0_stateless/01521_dummy_test.sql
|
||||||
@@ -105,30 +105,40 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
|
|||||||
|
|
||||||
5) ensure everything is correct; if the test output is incorrect (due to some bug, for example), adjust the reference file using a text editor.
|
5) ensure everything is correct; if the test output is incorrect (due to some bug, for example), adjust the reference file using a text editor.
|
||||||
|
|
||||||
#### How to create good test
|
#### How to create a good test
|
||||||
|
|
||||||
- test should be
|
- A test should be
|
||||||
- minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
|
- minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
|
||||||
- fast - should not take longer than few seconds (better subseconds)
|
- fast - should not take longer than a few seconds (better subseconds)
|
||||||
- correct - fails when the feature is not working
|
- correct - fails when the feature is not working
|
||||||
- deterministic
|
- deterministic
|
||||||
- isolated / stateless
|
- isolated / stateless
|
||||||
- don't rely on environment specifics
|
- don't rely on environment specifics
|
||||||
- don't rely on timing when possible
|
- don't rely on timing when possible
|
||||||
- try to cover corner cases (zeros / Nulls / empty sets / throwing exceptions)
|
- try to cover corner cases (zeros / Nulls / empty sets / throwing exceptions)
|
||||||
- to test that a query returns an error, you can put a special comment after the query: `-- { serverError 60 }` or `-- { clientError 20 }`
|
- to test that a query returns an error, you can put a special comment after the query: `-- { serverError 60 }` or `-- { clientError 20 }`
|
||||||
- don't switch databases (unless necessary)
|
- don't switch databases (unless necessary)
|
||||||
- you can create several table replicas on the same node if needed
|
- you can create several table replicas on the same node if needed
|
||||||
- you can use one of the test cluster definitions when needed (see system.clusters)
|
- you can use one of the test cluster definitions when needed (see system.clusters)
|
||||||
- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable
|
- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable
|
||||||
- clean up the created objects before and after the test (DROP IF EXISTS) - in case of some dirty state
|
- clean up the created objects before and after the test (DROP IF EXISTS) - in case of some dirty state
|
||||||
- prefer sync mode of operations (mutations, merges, etc.)
|
- prefer sync mode of operations (mutations, merges, etc.)
|
||||||
- use other SQL files in the `0_stateless` folder as an example
|
- use other SQL files in the `0_stateless` folder as an example
|
||||||
- ensure the feature / feature combination you want to test is not yet covered with existing tests
|
- ensure the feature / feature combination you want to test is not yet covered with existing tests
|
||||||
|
|
||||||
|
#### Test naming rules
|
||||||
|
|
||||||
|
It's important to name tests correctly, so one can turn off certain subsets of tests in a clickhouse-test invocation.
|
||||||
|
|
||||||
|
| Tester flag| What should be in test name | When flag should be added |
|
||||||
|
|---|---|---|
|
||||||
|
| `--[no-]zookeeper`| "zookeeper" or "replica" | Test uses tables from ReplicatedMergeTree family |
|
||||||
|
| `--[no-]shard` | "shard" or "distributed" or "global" | Test uses connections to 127.0.0.2 or similar |
|
||||||
|
| `--[no-]long` | "long" or "deadlock" or "race" | Test runs longer than 60 seconds |
|
||||||
|
|
||||||
#### Commit / push / create PR.
|
#### Commit / push / create PR.
|
||||||
|
|
||||||
1) commit & push your changes
|
1) commit & push your changes
|
||||||
```sh
|
```sh
|
||||||
cd ~/workspace/ClickHouse
|
cd ~/workspace/ClickHouse
|
||||||
git add tests/queries/0_stateless/01521_dummy_test.sql
|
git add tests/queries/0_stateless/01521_dummy_test.sql
|
||||||
@@ -137,5 +147,5 @@ git commit # use some nice commit message when possible
|
|||||||
git push origin HEAD
|
git push origin HEAD
|
||||||
```
|
```
|
||||||
2) use the link shown during the push to create a PR into the main repo
|
2) use the link shown during the push to create a PR into the main repo
|
||||||
3) adjust the PR title and contents, in `Changelog category (leave one)` keep
|
3) adjust the PR title and contents, in `Changelog category (leave one)` keep
|
||||||
`Build/Testing/Packaging Improvement`, fill the rest of the fields if you want.
|
`Build/Testing/Packaging Improvement`, fill the rest of the fields if you want.
|
||||||
|
@@ -134,10 +134,10 @@ $ ./release
|
|||||||
|
|
||||||
## Faster builds for development
|
## Faster builds for development
|
||||||
|
|
||||||
Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. To enable these tweaks, pass the following flags to `cmake`:
|
Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. One common way to improve build time is to use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. To enable this tweak, pass the following flags to `cmake`:
|
||||||
|
|
||||||
```
|
```
|
||||||
-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
|
-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
|
||||||
```
|
```
|
||||||
|
|
||||||
## You Don’t Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
|
## You Don’t Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
|
||||||
|
@@ -8,7 +8,7 @@ toc_title: Third-Party Libraries Used
|
|||||||
The list of third-party libraries can be obtained by the following query:
|
The list of third-party libraries can be obtained by the following query:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en'
|
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en';
|
||||||
```
|
```
|
||||||
|
|
||||||
[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)
|
[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)
|
||||||
@@ -79,6 +79,7 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li
|
|||||||
| re2 | BSD 3-clause | /contrib/re2/LICENSE |
|
| re2 | BSD 3-clause | /contrib/re2/LICENSE |
|
||||||
| replxx | BSD 3-clause | /contrib/replxx/LICENSE.md |
|
| replxx | BSD 3-clause | /contrib/replxx/LICENSE.md |
|
||||||
| rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb |
|
| rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb |
|
||||||
|
| s2geometry | Apache | /contrib/s2geometry/LICENSE |
|
||||||
| sentry-native | MIT | /contrib/sentry-native/LICENSE |
|
| sentry-native | MIT | /contrib/sentry-native/LICENSE |
|
||||||
| simdjson | Apache | /contrib/simdjson/LICENSE |
|
| simdjson | Apache | /contrib/simdjson/LICENSE |
|
||||||
| snappy | Public Domain | /contrib/snappy/COPYING |
|
| snappy | Public Domain | /contrib/snappy/COPYING |
|
||||||
|
@ -123,7 +123,7 @@ For installing CMake and Ninja on Mac OS X first install Homebrew and then insta
|
|||||||
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
|
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
|
||||||
brew install cmake ninja
|
brew install cmake ninja
|
||||||
|
|
||||||
Next, check the version of CMake: `cmake --version`. If it is below 3.3, you should install a newer version from the website: https://cmake.org/download/.
|
Next, check the version of CMake: `cmake --version`. If it is below 3.12, you should install a newer version from the website: https://cmake.org/download/.
|
||||||
|
|
||||||
## Optional External Libraries {#optional-external-libraries}
|
## Optional External Libraries {#optional-external-libraries}
|
||||||
|
|
||||||
|
@ -749,7 +749,7 @@ If your code in the `master` branch is not buildable yet, exclude it from the bu
|
|||||||
|
|
||||||
**1.** The C++20 standard library is used (experimental extensions are allowed), as well as `boost` and `Poco` frameworks.
|
**1.** The C++20 standard library is used (experimental extensions are allowed), as well as `boost` and `Poco` frameworks.
|
||||||
|
|
||||||
**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in form of source code in `contrib` directory and built with ClickHouse.
|
**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in the form of source code in the `contrib` directory and built with ClickHouse. See [Guidelines for adding new third-party libraries](contrib.md#adding-third-party-libraries) for details.
|
||||||
|
|
||||||
**3.** Preference is always given to libraries that are already in use.
|
**3.** Preference is always given to libraries that are already in use.
|
||||||
|
|
||||||
|
@ -70,7 +70,13 @@ Note that integration of ClickHouse with third-party drivers is not tested. Also
|
|||||||
|
|
||||||
Unit tests are useful when you want to test not the ClickHouse as a whole, but a single isolated library or class. You can enable or disable build of tests with `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return non-zero exit code on test failure.
|
Unit tests are useful when you want to test not ClickHouse as a whole, but a single isolated library or class. You can enable or disable the build of tests with the `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return a non-zero exit code on test failure.
|
||||||
|
|
||||||
It’s not necessarily to have unit tests if the code is already covered by functional tests (and functional tests are usually much more simple to use).
|
It’s not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much simpler to use).
|
||||||
|
|
||||||
|
You can run individual gtest checks by calling the executable directly, for example:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ ./src/unit_tests_dbms --gtest_filter=LocalAddress*
|
||||||
|
```
|
||||||
|
|
||||||
## Performance Tests {#performance-tests}
|
## Performance Tests {#performance-tests}
|
||||||
|
|
||||||
|
@ -17,7 +17,7 @@ It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#re
|
|||||||
|
|
||||||
### Table UUID {#table-uuid}
|
### Table UUID {#table-uuid}
|
||||||
|
|
||||||
All tables in database `Atomic` have persistent [UUID](../../sql-reference/data-types/uuid.md) and store data in directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is UUID of the table.
|
All tables in database `Atomic` have a persistent [UUID](../../sql-reference/data-types/uuid.md) and store data in the directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is the UUID of the table.
|
||||||
Usually, the UUID is generated automatically, but the user can also explicitly specify the UUID in the same way when creating the table (this is not recommended). To display the `SHOW CREATE` query with the UUID you can use setting [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil). For example:
|
Usually, the UUID is generated automatically, but the user can also explicitly specify the UUID when creating the table (this is not recommended). To display the `SHOW CREATE` query with the UUID, you can use the [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil) setting. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
@ -47,7 +47,7 @@ EXCHANGE TABLES new_table AND old_table;
|
|||||||
|
|
||||||
### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
|
### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
|
||||||
|
|
||||||
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended to not specify engine parameters - path in ZooKeeper and replica name. In this case, configuration parameters will be used [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). If you want to specify engine parameters explicitly, it is recommended to use {uuid} macros. This is useful so that unique paths are automatically generated for each table in ZooKeeper.
|
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify engine parameters (the path in ZooKeeper and the replica name). In this case, the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) are used. If you want to specify engine parameters explicitly, it is recommended to use the `{uuid}` macro, so that unique paths are automatically generated for each table in ZooKeeper.
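As a sketch (the database, table, and column names here are hypothetical), an explicit table definition that relies on the `{uuid}` macro could look like:

``` sql
-- The ZooKeeper path contains {uuid}, so every table
-- automatically gets its own unique replication path.
CREATE TABLE atomic_db.events
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY id;
```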
|
||||||
|
|
||||||
## See Also
|
## See Also
|
||||||
|
|
||||||
|
@ -14,7 +14,7 @@ You can also use the following database engines:
|
|||||||
|
|
||||||
- [MySQL](../../engines/database-engines/mysql.md)
|
- [MySQL](../../engines/database-engines/mysql.md)
|
||||||
|
|
||||||
- [MaterializeMySQL](../../engines/database-engines/materialize-mysql.md)
|
- [MaterializedMySQL](../../engines/database-engines/materialized-mysql.md)
|
||||||
|
|
||||||
- [Lazy](../../engines/database-engines/lazy.md)
|
- [Lazy](../../engines/database-engines/lazy.md)
|
||||||
|
|
||||||
@ -22,4 +22,4 @@ You can also use the following database engines:
|
|||||||
|
|
||||||
- [PostgreSQL](../../engines/database-engines/postgresql.md)
|
- [PostgreSQL](../../engines/database-engines/postgresql.md)
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/database_engines/) <!--hide-->
|
- [Replicated](../../engines/database-engines/replicated.md)
|
||||||
|
@ -1,9 +1,11 @@
|
|||||||
---
|
---
|
||||||
toc_priority: 29
|
toc_priority: 29
|
||||||
toc_title: MaterializeMySQL
|
toc_title: MaterializedMySQL
|
||||||
---
|
---
|
||||||
|
|
||||||
# MaterializeMySQL {#materialize-mysql}
|
# MaterializedMySQL {#materialized-mysql}
|
||||||
|
|
||||||
|
**This is an experimental feature that should not be used in production.**
|
||||||
|
|
||||||
Creates ClickHouse database with all the tables existing in MySQL, and all the data in those tables.
|
Creates ClickHouse database with all the tables existing in MySQL, and all the data in those tables.
|
||||||
|
|
||||||
@ -15,7 +17,7 @@ This feature is experimental.
|
|||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
|
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
|
||||||
ENGINE = MaterializeMySQL('host:port', ['database' | database], 'user', 'password') [SETTINGS ...]
|
ENGINE = MaterializedMySQL('host:port', ['database' | database], 'user', 'password') [SETTINGS ...]
|
||||||
```
|
```
|
||||||
|
|
||||||
**Engine Parameters**
|
**Engine Parameters**
|
||||||
@ -25,13 +27,35 @@ ENGINE = MaterializeMySQL('host:port', ['database' | database], 'user', 'passwor
|
|||||||
- `user` — MySQL user.
|
- `user` — MySQL user.
|
||||||
- `password` — User password.
|
- `password` — User password.
|
||||||
|
|
||||||
|
**Engine Settings**
|
||||||
|
- `max_rows_in_buffer` — Maximum number of rows that can be cached in memory for a single table (the cached data cannot be queried). When this number of rows is exceeded, the data is materialized. Default: `65505`.
|
||||||
|
- `max_bytes_in_buffer` — Maximum number of bytes that can be cached in memory for a single table (the cached data cannot be queried). When this number of bytes is exceeded, the data is materialized. Default: `1048576`.
|
||||||
|
- `max_rows_in_buffers` — Maximum number of rows that can be cached in memory for the whole database (the cached data cannot be queried). When this number of rows is exceeded, the data is materialized. Default: `65505`.
|
||||||
|
- `max_bytes_in_buffers` — Maximum number of bytes that can be cached in memory for the whole database (the cached data cannot be queried). When this number of bytes is exceeded, the data is materialized. Default: `1048576`.
|
||||||
|
- `max_flush_data_time` — Maximum number of milliseconds that data can stay cached in memory for the database (the cached data cannot be queried). When this time is exceeded, the data is materialized. Default: `1000`.
|
||||||
|
- `max_wait_time_when_mysql_unavailable` — Retry interval when MySQL is not available, in milliseconds. A negative value disables retries. Default: `1000`.
|
||||||
|
- `allows_query_when_mysql_lost` — Allows querying a materialized table when the MySQL connection is lost. Default: `0` (`false`).
|
||||||
|
``` sql
|
||||||
|
CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', '***')
|
||||||
|
SETTINGS
|
||||||
|
allows_query_when_mysql_lost=true,
|
||||||
|
max_wait_time_when_mysql_unavailable=10000;
|
||||||
|
```
|
||||||
|
|
||||||
|
**Settings on MySQL-server side**
|
||||||
|
|
||||||
|
For `MaterializedMySQL` to work correctly, a few mandatory `MySQL`-side configuration settings must be set:
|
||||||
|
|
||||||
|
- `default_authentication_plugin = mysql_native_password`, since `MaterializedMySQL` can only authenticate with this method.
|
||||||
|
- `gtid_mode = on`, since GTID-based logging is mandatory for correct `MaterializedMySQL` replication. Note that when turning this mode `on` you should also set `enforce_gtid_consistency = on`.
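A minimal `my.cnf` fragment on the MySQL side might therefore look like this (a sketch for a freshly initialized server; on a running server `gtid_mode` must be raised stepwise per the MySQL manual):

```ini
[mysqld]
# Required so MaterializedMySQL can authenticate:
default_authentication_plugin = mysql_native_password
# GTID-based binary logging is mandatory for replication:
gtid_mode = ON
enforce_gtid_consistency = ON
```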
|
||||||
|
|
||||||
## Virtual columns {#virtual-columns}
|
## Virtual columns {#virtual-columns}
|
||||||
|
|
||||||
When working with the `MaterializeMySQL` database engine, [ReplacingMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) tables are used with virtual `_sign` and `_version` columns.
|
When working with the `MaterializedMySQL` database engine, [ReplacingMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) tables are used with virtual `_sign` and `_version` columns.
|
||||||
|
|
||||||
- `_version` — Transaction counter. Type [UInt64](../../sql-reference/data-types/int-uint.md).
|
- `_version` — Transaction counter. Type [UInt64](../../sql-reference/data-types/int-uint.md).
|
||||||
- `_sign` — Deletion mark. Type [Int8](../../sql-reference/data-types/int-uint.md). Possible values:
|
- `_sign` — Deletion mark. Type [Int8](../../sql-reference/data-types/int-uint.md). Possible values:
|
||||||
- `1` — Row is not deleted,
|
- `1` — Row is not deleted,
|
||||||
- `-1` — Row is deleted.
|
- `-1` — Row is deleted.
|
||||||
|
|
||||||
## Data Types Support {#data_types-support}
|
## Data Types Support {#data_types-support}
|
||||||
@ -53,6 +77,7 @@ When working with the `MaterializeMySQL` database engine, [ReplacingMergeTree](.
|
|||||||
| STRING | [String](../../sql-reference/data-types/string.md) |
|
| STRING | [String](../../sql-reference/data-types/string.md) |
|
||||||
| VARCHAR, VAR_STRING | [String](../../sql-reference/data-types/string.md) |
|
| VARCHAR, VAR_STRING | [String](../../sql-reference/data-types/string.md) |
|
||||||
| BLOB | [String](../../sql-reference/data-types/string.md) |
|
| BLOB | [String](../../sql-reference/data-types/string.md) |
|
||||||
|
| BINARY | [FixedString](../../sql-reference/data-types/fixedstring.md) |
|
||||||
|
|
||||||
Other types are not supported. If MySQL table contains a column of such type, ClickHouse throws exception "Unhandled data type" and stops replication.
|
Other types are not supported. If a MySQL table contains a column of such a type, ClickHouse throws the exception "Unhandled data type" and stops replication.
|
||||||
|
|
||||||
@ -60,28 +85,38 @@ Other types are not supported. If MySQL table contains a column of such type, Cl
|
|||||||
|
|
||||||
## Specifics and Recommendations {#specifics-and-recommendations}
|
## Specifics and Recommendations {#specifics-and-recommendations}
|
||||||
|
|
||||||
|
### Compatibility restrictions
|
||||||
|
|
||||||
|
Apart from the data type limitations, there are a few restrictions compared to `MySQL` databases that must be resolved before replication is possible:
|
||||||
|
|
||||||
|
- Each table in `MySQL` must contain a `PRIMARY KEY`.
|
||||||
|
|
||||||
|
- Replication does not work for tables containing rows with `ENUM` field values outside the range specified in the `ENUM` signature.
|
||||||
|
|
||||||
### DDL Queries {#ddl-queries}
|
### DDL Queries {#ddl-queries}
|
||||||
|
|
||||||
MySQL DDL queries are converted into the corresponding ClickHouse DDL queries ([ALTER](../../sql-reference/statements/alter/index.md), [CREATE](../../sql-reference/statements/create/index.md), [DROP](../../sql-reference/statements/drop.md), [RENAME](../../sql-reference/statements/rename.md)). If ClickHouse cannot parse some DDL query, the query is ignored.
|
MySQL DDL queries are converted into the corresponding ClickHouse DDL queries ([ALTER](../../sql-reference/statements/alter/index.md), [CREATE](../../sql-reference/statements/create/index.md), [DROP](../../sql-reference/statements/drop.md), [RENAME](../../sql-reference/statements/rename.md)). If ClickHouse cannot parse some DDL query, the query is ignored.
|
||||||
|
|
||||||
### Data Replication {#data-replication}
|
### Data Replication {#data-replication}
|
||||||
|
|
||||||
`MaterializeMySQL` does not support direct `INSERT`, `DELETE` and `UPDATE` queries. However, they are supported in terms of data replication:
|
`MaterializedMySQL` does not support direct `INSERT`, `DELETE` and `UPDATE` queries. However, they are supported in terms of data replication:
|
||||||
|
|
||||||
- MySQL `INSERT` query is converted into `INSERT` with `_sign=1`.
|
- MySQL `INSERT` query is converted into `INSERT` with `_sign=1`.
|
||||||
|
|
||||||
- MySQL `DELETE` query is converted into `INSERT` with `_sign=-1`.
|
- MySQL `DELETE` query is converted into `INSERT` with `_sign=-1`.
|
||||||
|
|
||||||
- MySQL `UPDATE` query is converted into `INSERT` with `_sign=-1` and `INSERT` with `_sign=1`.
|
- MySQL `UPDATE` query is converted into `INSERT` with `_sign=-1` and `INSERT` with `_sign=1`.
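For illustration (the table and values are hypothetical, and these rows are written by the replication machinery, not by the user), a MySQL `UPDATE` such as `UPDATE test SET b = 222 WHERE a = 2` is conceptually replicated as:

``` sql
-- The old row version is cancelled and the new version is appended;
-- ReplacingMergeTree distinguishes them via _sign and _version.
INSERT INTO test (a, b, _sign, _version) VALUES (2, 22, -1, 2);
INSERT INTO test (a, b, _sign, _version) VALUES (2, 222, 1, 2);
```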
|
||||||
|
|
||||||
### Selecting from MaterializeMySQL Tables {#select}
|
### Selecting from MaterializedMySQL Tables {#select}
|
||||||
|
|
||||||
`SELECT` query from `MaterializeMySQL` tables has some specifics:
|
`SELECT` query from `MaterializedMySQL` tables has some specifics:
|
||||||
|
|
||||||
- If `_version` is not specified in the `SELECT` query, [FINAL](../../sql-reference/statements/select/from.md#select-from-final) modifier is used. So only rows with `MAX(_version)` are selected.
|
- If `_version` is not specified in the `SELECT` query, [FINAL](../../sql-reference/statements/select/from.md#select-from-final) modifier is used. So only rows with `MAX(_version)` are selected.
|
||||||
|
|
||||||
- If `_sign` is not specified in the `SELECT` query, `WHERE _sign=1` is used by default. So the deleted rows are not included into the result set.
|
- If `_sign` is not specified in the `SELECT` query, `WHERE _sign=1` is used by default. So the deleted rows are not included into the result set.
|
||||||
|
|
||||||
|
- The result includes column comments if they exist in the MySQL database tables.
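Under these defaults, a plain `SELECT` is conceptually equivalent to an explicit query like the following (the table name is hypothetical):

``` sql
-- SELECT * FROM mysql.test behaves roughly like:
SELECT a, b, c
FROM mysql.test
FINAL
WHERE _sign = 1;
```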
|
||||||
|
|
||||||
### Index Conversion {#index-conversion}
|
### Index Conversion {#index-conversion}
|
||||||
|
|
||||||
MySQL `PRIMARY KEY` and `INDEX` clauses are converted into `ORDER BY` tuples in ClickHouse tables.
|
MySQL `PRIMARY KEY` and `INDEX` clauses are converted into `ORDER BY` tuples in ClickHouse tables.
|
||||||
@ -91,10 +126,10 @@ ClickHouse has only one physical order, which is determined by `ORDER BY` clause
|
|||||||
**Notes**
|
**Notes**
|
||||||
|
|
||||||
- Rows with `_sign=-1` are not deleted physically from the tables.
|
- Rows with `_sign=-1` are not deleted physically from the tables.
|
||||||
- Cascade `UPDATE/DELETE` queries are not supported by the `MaterializeMySQL` engine.
|
- Cascade `UPDATE/DELETE` queries are not supported by the `MaterializedMySQL` engine.
|
||||||
- Replication can be easily broken.
|
- Replication can be easily broken.
|
||||||
- Manual operations on database and tables are forbidden.
|
- Manual operations on database and tables are forbidden.
|
||||||
- `MaterializeMySQL` is influenced by [optimize_on_insert](../../operations/settings/settings.md#optimize-on-insert) setting. The data is merged in the corresponding table in the `MaterializeMySQL` database when a table in the MySQL server changes.
|
- `MaterializedMySQL` is influenced by [optimize_on_insert](../../operations/settings/settings.md#optimize-on-insert) setting. The data is merged in the corresponding table in the `MaterializedMySQL` database when a table in the MySQL server changes.
|
||||||
|
|
||||||
## Examples of Use {#examples-of-use}
|
## Examples of Use {#examples-of-use}
|
||||||
|
|
||||||
@ -111,9 +146,9 @@ mysql> SELECT * FROM test;
|
|||||||
```
|
```
|
||||||
|
|
||||||
```text
|
```text
|
||||||
+---+------+------+
|
+---+------+------+
|
||||||
| a | b | c |
|
| a | b | c |
|
||||||
+---+------+------+
|
+---+------+------+
|
||||||
| 2 | 222 | Wow! |
|
| 2 | 222 | Wow! |
|
||||||
+---+------+------+
|
+---+------+------+
|
||||||
```
|
```
|
||||||
@ -123,7 +158,7 @@ Database in ClickHouse, exchanging data with the MySQL server:
|
|||||||
The database and the table created:
|
The database and the table created:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE DATABASE mysql ENGINE = MaterializeMySQL('localhost:3306', 'db', 'user', '***');
|
CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', '***');
|
||||||
SHOW TABLES FROM mysql;
|
SHOW TABLES FROM mysql;
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -140,9 +175,9 @@ SELECT * FROM mysql.test;
|
|||||||
```
|
```
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
┌─a─┬──b─┐
|
┌─a─┬──b─┐
|
||||||
│ 1 │ 11 │
|
│ 1 │ 11 │
|
||||||
│ 2 │ 22 │
|
│ 2 │ 22 │
|
||||||
└───┴────┘
|
└───┴────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -153,9 +188,9 @@ SELECT * FROM mysql.test;
|
|||||||
```
|
```
|
||||||
|
|
||||||
``` text
|
``` text
|
||||||
┌─a─┬───b─┬─c────┐
|
┌─a─┬───b─┬─c────┐
|
||||||
│ 2 │ 222 │ Wow! │
|
│ 2 │ 222 │ Wow! │
|
||||||
└───┴─────┴──────┘
|
└───┴─────┴──────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/engines/database-engines/materialize-mysql/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/engines/database-engines/materialized-mysql/) <!--hide-->
|
@ -53,7 +53,7 @@ All other MySQL data types are converted into [String](../../sql-reference/data-
|
|||||||
|
|
||||||
## Global Variables Support {#global-variables-support}
|
## Global Variables Support {#global-variables-support}
|
||||||
|
|
||||||
For better compatibility you may address global variables in MySQL style, as `@@identifier`.
|
For better compatibility you may address global variables in MySQL style, as `@@identifier`.
|
||||||
|
|
||||||
These variables are supported:
|
These variables are supported:
|
||||||
- `version`
|
- `version`
|
||||||
|
@ -14,7 +14,7 @@ Supports table structure modifications (`ALTER TABLE ... ADD|DROP COLUMN`). If `
|
|||||||
## Creating a Database {#creating-a-database}
|
## Creating a Database {#creating-a-database}
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE DATABASE test_database
|
CREATE DATABASE test_database
|
||||||
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]);
|
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]);
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -43,14 +43,14 @@ ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cac
|
|||||||
| TEXT, CHAR | [String](../../sql-reference/data-types/string.md) |
|
| TEXT, CHAR | [String](../../sql-reference/data-types/string.md) |
|
||||||
| INTEGER | Nullable([Int32](../../sql-reference/data-types/int-uint.md))|
|
| INTEGER | Nullable([Int32](../../sql-reference/data-types/int-uint.md))|
|
||||||
| ARRAY | [Array](../../sql-reference/data-types/array.md) |
|
| ARRAY | [Array](../../sql-reference/data-types/array.md) |
|
||||||
|
|
||||||
|
|
||||||
## Examples of Use {#examples-of-use}
|
## Examples of Use {#examples-of-use}
|
||||||
|
|
||||||
Database in ClickHouse, exchanging data with the PostgreSQL server:
|
Database in ClickHouse, exchanging data with the PostgreSQL server:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE DATABASE test_database
|
CREATE DATABASE test_database
|
||||||
ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 1);
|
ENGINE = PostgreSQL('postgres1:5432', 'test_database', 'postgres', 'mysecretpassword', 1);
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -102,7 +102,7 @@ SELECT * FROM test_database.test_table;
|
|||||||
└────────┴───────┘
|
└────────┴───────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Consider the table structure was modified in PostgreSQL:
|
Consider the table structure was modified in PostgreSQL:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
postgre> ALTER TABLE test_table ADD COLUMN data Text
|
postgre> ALTER TABLE test_table ADD COLUMN data Text
|
||||||
|
115
docs/en/engines/database-engines/replicated.md
Normal file
@ -0,0 +1,115 @@
|
|||||||
|
# [experimental] Replicated {#replicated}
|
||||||
|
|
||||||
|
The engine is based on the [Atomic](../../engines/database-engines/atomic.md) engine. It supports replication of metadata via DDL log being written to ZooKeeper and executed on all of the replicas for a given database.
|
||||||
|
|
||||||
|
One ClickHouse server can run and update multiple replicated databases at the same time, but it cannot host multiple replicas of the same replicated database.
|
||||||
|
|
||||||
|
## Creating a Database {#creating-a-database}
|
||||||
|
``` sql
|
||||||
|
CREATE DATABASE testdb ENGINE = Replicated('zoo_path', 'shard_name', 'replica_name') [SETTINGS ...]
|
||||||
|
```
|
||||||
|
|
||||||
|
**Engine Parameters**
|
||||||
|
|
||||||
|
- `zoo_path` — ZooKeeper path. The same ZooKeeper path corresponds to the same database.
|
||||||
|
- `shard_name` — Shard name. Database replicas are grouped into shards by `shard_name`.
|
||||||
|
- `replica_name` — Replica name. Replica names must be different for all replicas of the same shard.
|
||||||
|
|
||||||
|
!!! note "Warning"
|
||||||
|
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, if no arguments are provided, the default arguments are used: `/clickhouse/tables/{uuid}/{shard}` and `{replica}`. These can be changed in the server settings [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). The macro `{uuid}` is expanded to the table's UUID, while `{shard}` and `{replica}` are expanded to values from the server config, not from the database engine arguments. In the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.
|
||||||
|
|
||||||
|
## Specifics and Recommendations {#specifics-and-recommendations}
|
||||||
|
|
||||||
|
DDL queries with `Replicated` database work in a similar way to [ON CLUSTER](../../sql-reference/distributed-ddl.md) queries, but with minor differences.
|
||||||
|
|
||||||
|
First, the DDL request tries to execute on the initiator (the host that originally received the request from the user). If the request is not fulfilled, the user immediately receives an error and other hosts do not try to fulfill it. If the request has been successfully completed on the initiator, then all other hosts will automatically retry until they complete it. The initiator will try to wait for the query to be completed on the other hosts (for no longer than [distributed_ddl_task_timeout](../../operations/settings/settings.md#distributed_ddl_task_timeout)) and will return a table with the query execution statuses on each host.
|
||||||
|
|
||||||
|
The behavior in case of errors is regulated by the [distributed_ddl_output_mode](../../operations/settings/settings.md#distributed_ddl_output_mode) setting. For a `Replicated` database it is better to set it to `null_status_on_timeout`: if some hosts did not have time to execute the request within [distributed_ddl_task_timeout](../../operations/settings/settings.md#distributed_ddl_task_timeout), do not throw an exception but show a `NULL` status for them in the table.
|
||||||
|
|
||||||
|
The [system.clusters](../../operations/system-tables/clusters.md) system table contains a cluster named like the replicated database, which consists of all replicas of the database. This cluster is updated automatically when creating/deleting replicas, and it can be used for [Distributed](../../engines/table-engines/special/distributed.md#distributed) tables.
|
||||||
|
|
||||||
|
When creating a new replica of the database, this replica creates tables by itself. If the replica has been unavailable for a long time and has lagged behind the replication log, it compares its local metadata with the current metadata in ZooKeeper, moves extra tables with data to a separate non-replicated database (so as not to accidentally delete anything superfluous), creates the missing tables, and updates table names if they have been renamed. The data is replicated at the `ReplicatedMergeTree` level, i.e. if a table is not replicated, its data will not be replicated (the database is responsible only for metadata).
|
||||||
|
|
||||||
|
## Usage Example {#usage-example}
|
||||||
|
|
||||||
|
Creating a cluster with three hosts:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
node1 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','replica1');
|
||||||
|
node2 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','other_replica');
|
||||||
|
node3 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','{replica}');
|
||||||
|
```
|
||||||
|
|
||||||
|
Running the DDL-query:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CREATE TABLE r.rmt (n UInt64) ENGINE=ReplicatedMergeTree ORDER BY n;
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─────hosts────────────┬──status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
|
||||||
|
│ shard1|replica1 │ 0 │ │ 2 │ 0 │
|
||||||
|
│ shard1|other_replica │ 0 │ │ 1 │ 0 │
|
||||||
|
│ other_shard|r1 │ 0 │ │ 0 │ 0 │
|
||||||
|
└──────────────────────┴─────────┴───────┴─────────────────────┴──────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Showing the system table:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
|
||||||
|
FROM system.clusters WHERE cluster='r';
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
|
||||||
|
│ r │ 1 │ 1 │ node3 │ 127.0.0.1 │ 9002 │ 0 │
|
||||||
|
│ r │ 2 │ 1 │ node2 │ 127.0.0.1 │ 9001 │ 0 │
|
||||||
|
│ r │ 2 │ 2 │ node1 │ 127.0.0.1 │ 9000 │ 1 │
|
||||||
|
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Creating a distributed table and inserting the data:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
node2 :) CREATE TABLE r.d (n UInt64) ENGINE=Distributed('r','r','rmt', n % 2);
|
||||||
|
node3 :) INSERT INTO r.d SELECT * FROM numbers(10);
|
||||||
|
node1 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
|
||||||
|
```
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─hosts─┬─groupArray(n)─┐
|
||||||
|
│ node1 │ [1,3,5,7,9] │
|
||||||
|
│ node2 │ [0,2,4,6,8] │
|
||||||
|
└───────┴───────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Adding a replica on one more host:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
node4 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','r2');
|
||||||
|
```
|
||||||
|
|
||||||
|
The cluster configuration will look like this:
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
|
||||||
|
│ r │ 1 │ 1 │ node3 │ 127.0.0.1 │ 9002 │ 0 │
|
||||||
|
│ r │ 1 │ 2 │ node4 │ 127.0.0.1 │ 9003 │ 0 │
|
||||||
|
│ r │ 2 │ 1 │ node2 │ 127.0.0.1 │ 9001 │ 0 │
|
||||||
|
│ r │ 2 │ 2 │ node1 │ 127.0.0.1 │ 9000 │ 1 │
|
||||||
|
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
The distributed table will also get data from the new host:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
node2 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
|
┌─hosts─┬─groupArray(n)─┐
|
||||||
|
│ node2 │ [1,3,5,7,9] │
|
||||||
|
│ node4 │ [0,2,4,6,8] │
|
||||||
|
└───────┴───────────────┘
|
||||||
|
```
|
@ -35,7 +35,7 @@ The table structure can differ from the original table structure:
|
|||||||
- `password` — User password.
|
- `password` — User password.
|
||||||
|
|
||||||
## Implementation Details {#implementation-details}
|
## Implementation Details {#implementation-details}
|
||||||
|
|
||||||
Supports multiple replicas that must be listed by `|` and shards must be listed by `,`. For example:
|
Supports multiple replicas that must be listed by `|` and shards must be listed by `,`. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@@ -20,7 +20,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

Required parameters:

- `primary_key_name` – any column name in the column list.
- `primary key` must be specified; it supports only one column in the primary key. The primary key is serialized in binary as a `rocksdb` key.
- Columns other than the primary key are serialized in binary as the `rocksdb` value in the corresponding order.
- Queries with key `equals` or `in` filtering are optimized to a multi-key lookup from `rocksdb`.
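The key-lookup optimization described above can be sketched with a plain dictionary standing in for the RocksDB key space; the names here are illustrative, not ClickHouse internals:

```python
# Sketch: why `WHERE key = ...` / `WHERE key IN (...)` is cheap for a
# key-value engine. A dict stands in for the serialized rocksdb keys.
store = {b"k1": b"v1", b"k2": b"v2", b"k3": b"v3"}

def lookup_keys(keys):
    """Multi-key point lookup: proportional to len(keys), no full scan."""
    return {k: store[k] for k in keys if k in store}

def full_scan(predicate):
    """Any other filter has to visit every row."""
    return {k: v for k, v in store.items() if predicate(k, v)}

print(lookup_keys([b"k1", b"k3"]))
print(full_scan(lambda k, v: v == b"v2"))
```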
@@ -39,4 +39,46 @@ ENGINE = EmbeddedRocksDB
PRIMARY KEY key
```
## Metrics

There is also a `system.rocksdb` table that exposes RocksDB statistics:

```sql
SELECT
    name,
    value
FROM system.rocksdb

┌─name──────────────────────┬─value─┐
│ no.file.opens             │     1 │
│ number.block.decompressed │     1 │
└───────────────────────────┴───────┘
```

## Configuration

You can also change any [rocksdb options](https://github.com/facebook/rocksdb/wiki/Option-String-and-Option-Map) using config:

```xml
<rocksdb>
    <options>
        <max_background_jobs>8</max_background_jobs>
    </options>
    <column_family_options>
        <num_levels>2</num_levels>
    </column_family_options>
    <tables>
        <table>
            <name>TABLE</name>
            <options>
                <max_background_jobs>8</max_background_jobs>
            </options>
            <column_family_options>
                <num_levels>2</num_levels>
            </column_family_options>
        </table>
    </tables>
</rocksdb>
```

[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/embedded-rocksdb/) <!--hide-->
@@ -1,6 +1,6 @@
---
toc_priority: 12
toc_title: MaterializedPostgreSQL
---

# MaterializedPostgreSQL {#materialize-postgresql}
@@ -37,7 +37,7 @@ Table in ClickHouse which allows to read data from MongoDB collection:
``` text
CREATE TABLE mongo_table
(
    key UInt64,
    data String
) ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table', 'testuser', 'clickhouse');
```
@@ -49,14 +49,14 @@ PostgreSQL `Array` types are converted into ClickHouse arrays.

!!! info "Note"
    Be careful: in PostgreSQL, array data created like `type_name[]` may contain multi-dimensional arrays of different dimensions in different table rows of the same column. In ClickHouse, all table rows of a column may only contain multidimensional arrays with the same number of dimensions.

Supports multiple replicas, which must be listed by `|`. For example:

```sql
CREATE TABLE test_replicas (id UInt32, name String) ENGINE = PostgreSQL(`postgres{2|3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword');
```

Replica priority for the PostgreSQL dictionary source is supported. The bigger the number in the map, the lower the priority. The highest priority is `0`.

In the example below replica `example01-1` has the highest priority:
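The priority rule above (a smaller number means higher priority, `0` highest) can be sketched as follows; the replica names and the priority map are illustrative, not taken from a real configuration:

```python
# Sketch: order replicas by priority, where a smaller number means a
# higher priority and 0 is the highest.
replica_priority = {
    "example01-1": 0,  # highest priority, tried first
    "example01-2": 1,
    "example01-3": 2,
}

def connection_order(priorities):
    """Return replica names sorted from highest to lowest priority."""
    return sorted(priorities, key=lambda name: priorities[name])

print(connection_order(replica_priority))
```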
@@ -14,6 +14,8 @@ Engines of the family:
- [Log](../../../engines/table-engines/log-family/log.md)
- [TinyLog](../../../engines/table-engines/log-family/tinylog.md)

`Log` family table engines can store data to [HDFS](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-hdfs) or [S3](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-s3) distributed file systems.

## Common Properties {#common-properties}

Engines:
@@ -5,10 +5,8 @@ toc_title: Log

# Log {#log}

The engine belongs to the family of `Log` engines. See the common properties of `Log` engines and their differences in the [Log Engine Family](../../../engines/table-engines/log-family/index.md) article.

`Log` differs from [TinyLog](../../../engines/table-engines/log-family/tinylog.md) in that a small file of "marks" resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.

For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.

The `Log` engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The `Log` engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
@@ -76,7 +76,7 @@ For a description of parameters, see the [CREATE query description](../../../sql

- `SAMPLE BY` — An expression for sampling. Optional.

    If a sampling expression is used, the primary key must contain it. The result of the sampling expression must be an unsigned integer. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.

- `TTL` — A list of rules specifying storage duration of rows and defining logic of automatic parts movement [between disks and volumes](#table_engine-mergetree-multiple-volumes). Optional.
|
|||||||
|
|
||||||
## Using S3 for Data Storage {#table_engine-mergetree-s3}
|
## Using S3 for Data Storage {#table_engine-mergetree-s3}
|
||||||
|
|
||||||
`MergeTree` family table engines is able to store data to [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.
|
`MergeTree` family table engines can store data to [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.
|
||||||
|
|
||||||
|
This feature is under development and not ready for production. There are known drawbacks such as very low performance.
|
||||||
|
|
||||||
Configuration markup:
|
Configuration markup:
|
||||||
``` xml
|
``` xml
|
||||||
@@ -762,11 +764,13 @@ Configuration markup:
```

Required parameters:

- `endpoint` — S3 endpoint URL in `path` or `virtual hosted` [styles](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html). The endpoint URL should contain a bucket and root path to store data.
- `access_key_id` — S3 access key id.
- `secret_access_key` — S3 secret access key.

Optional parameters:

- `region` — S3 region name.
- `use_environment_credentials` — Reads AWS credentials from the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_SESSION_TOKEN` if they exist. Default value is `false`.
- `use_insecure_imds_request` — If set to `true`, the S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Default value is `false`.
@@ -782,7 +786,6 @@ Optional parameters:
- `skip_access_check` — If true, disk access checks will not be performed on disk start-up. Default value is `false`.
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set.

S3 disk can be configured as `main` or `cold` storage:
``` xml
<storage_configuration>
|
|||||||
|
|
||||||
In case of `cold` option a data can be moved to S3 if local disk free size will be smaller than `move_factor * disk_size` or by TTL move rule.
|
In case of `cold` option a data can be moved to S3 if local disk free size will be smaller than `move_factor * disk_size` or by TTL move rule.
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/) <!--hide-->
|
## Using HDFS for Data Storage {#table_engine-mergetree-hdfs}
|
||||||
|
|
||||||
|
[HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) is a distributed file system for remote data storage.
|
||||||
|
|
||||||
|
`MergeTree` family table engines can store data to HDFS using a disk with type `HDFS`.
|
||||||
|
|
||||||
|
Configuration markup:
|
||||||
|
``` xml
|
||||||
|
<yandex>
|
||||||
|
<storage_configuration>
|
||||||
|
<disks>
|
||||||
|
<hdfs>
|
||||||
|
<type>hdfs</type>
|
||||||
|
<endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
|
||||||
|
</hdfs>
|
||||||
|
</disks>
|
||||||
|
<policies>
|
||||||
|
<hdfs>
|
||||||
|
<volumes>
|
||||||
|
<main>
|
||||||
|
<disk>hdfs</disk>
|
||||||
|
</main>
|
||||||
|
</volumes>
|
||||||
|
</hdfs>
|
||||||
|
</policies>
|
||||||
|
</storage_configuration>
|
||||||
|
|
||||||
|
<merge_tree>
|
||||||
|
<min_bytes_for_wide_part>0</min_bytes_for_wide_part>
|
||||||
|
</merge_tree>
|
||||||
|
</yandex>
|
||||||
|
```
|
||||||
|
|
||||||
|
Required parameters:
|
||||||
|
|
||||||
|
- `endpoint` — HDFS endpoint URL in `path` format. Endpoint URL should contain a root path to store data.
|
||||||
|
|
||||||
|
Optional parameters:
|
||||||
|
|
||||||
|
- `min_bytes_for_seek` — The minimal number of bytes to use seek operation instead of sequential read. Default value: `1 Mb`.
|
||||||
|
@@ -101,7 +101,7 @@ For very large clusters, you can use different ZooKeeper clusters for different

Replication is asynchronous and multi-master. `INSERT` queries (as well as `ALTER`) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas are not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network. The number of threads performing background tasks for replicated tables can be set by the [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) setting.

The `ReplicatedMergeTree` engine uses a separate thread pool for replicated fetches. The size of the pool is limited by the [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size) setting, which can be tuned with a server restart.

By default, an `INSERT` query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the `insert_quorum` option.
|
|||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
|
||||||
As the example shows, these parameters can contain substitutions in curly brackets. The substituted values are taken from the «[macros](../../../operations/server-configuration-parameters/settings/#macros) section of the configuration file.
|
As the example shows, these parameters can contain substitutions in curly brackets. The substituted values are taken from the «[macros](../../../operations/server-configuration-parameters/settings/#macros) section of the configuration file.
|
||||||
|
|
||||||
Example:
|
Example:
|
||||||
|
|
||||||
@@ -198,7 +198,7 @@ In this case, you can omit arguments when creating tables:
``` sql
CREATE TABLE table_name (
    x UInt32
) ENGINE = ReplicatedMergeTree
ORDER BY x;
```

@@ -207,7 +207,7 @@ It is equivalent to:
``` sql
CREATE TABLE table_name (
    x UInt32
) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/{database}/table_name', '{replica}')
ORDER BY x;
```
@@ -37,6 +37,14 @@ Also, it accepts the following settings:

- `max_delay_to_insert` - max delay of inserting data into a Distributed table, in seconds, if there are a lot of pending bytes for asynchronous send. Default: 60.

- `monitor_batch_inserts` - same as [distributed_directory_monitor_batch_inserts](../../../operations/settings/settings.md#distributed_directory_monitor_batch_inserts)

- `monitor_split_batch_on_failure` - same as [distributed_directory_monitor_split_batch_on_failure](../../../operations/settings/settings.md#distributed_directory_monitor_split_batch_on_failure)

- `monitor_sleep_time_ms` - same as [distributed_directory_monitor_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_sleep_time_ms)

- `monitor_max_sleep_time_ms` - same as [distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms)
|
!!! note "Note"
|
||||||
|
|
||||||
**Durability settings** (`fsync_...`):
|
**Durability settings** (`fsync_...`):
|
||||||
|
BIN  docs/en/images/play.png  (new binary file, 26 KiB, not shown)
@@ -1130,17 +1130,18 @@ The table below shows supported data types and how they match ClickHouse [data t
| `boolean`, `int`, `long`, `float`, `double` | [Int64](../sql-reference/data-types/int-uint.md), [UInt64](../sql-reference/data-types/int-uint.md) | `long` |
| `boolean`, `int`, `long`, `float`, `double` | [Float32](../sql-reference/data-types/float.md) | `float` |
| `boolean`, `int`, `long`, `float`, `double` | [Float64](../sql-reference/data-types/float.md) | `double` |
| `bytes`, `string`, `fixed`, `enum` | [String](../sql-reference/data-types/string.md) | `bytes` or `string` \* |
| `bytes`, `string`, `fixed` | [FixedString(N)](../sql-reference/data-types/fixedstring.md) | `fixed(N)` |
| `enum` | [Enum(8\|16)](../sql-reference/data-types/enum.md) | `enum` |
| `array(T)` | [Array(T)](../sql-reference/data-types/array.md) | `array(T)` |
| `union(null, T)`, `union(T, null)` | [Nullable(T)](../sql-reference/data-types/date.md) | `union(null, T)` |
| `null` | [Nullable(Nothing)](../sql-reference/data-types/special-data-types/nothing.md) | `null` |
| `int (date)` \** | [Date](../sql-reference/data-types/date.md) | `int (date)` \** |
| `long (timestamp-millis)` \** | [DateTime64(3)](../sql-reference/data-types/datetime.md) | `long (timestamp-millis)` \** |
| `long (timestamp-micros)` \** | [DateTime64(6)](../sql-reference/data-types/datetime.md) | `long (timestamp-micros)` \** |

\* `bytes` is default, controlled by [output_format_avro_string_column_pattern](../operations/settings/settings.md#settings-output_format_avro_string_column_pattern)
\** [Avro logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types)

Unsupported Avro data types: `record` (non-root), `map`
|
|||||||
| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `DOUBLE` |
|
| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `DOUBLE` |
|
||||||
| `DATE32` | [Date](../sql-reference/data-types/date.md) | `UINT16` |
|
| `DATE32` | [Date](../sql-reference/data-types/date.md) | `UINT16` |
|
||||||
| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `UINT32` |
|
| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `UINT32` |
|
||||||
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `STRING` |
|
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
|
||||||
| — | [FixedString](../sql-reference/data-types/fixedstring.md) | `STRING` |
|
| — | [FixedString](../sql-reference/data-types/fixedstring.md) | `BINARY` |
|
||||||
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
|
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
|
||||||
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
|
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
|
||||||
|
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
|
||||||
|
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |
|
||||||
|
|
||||||
Arrays can be nested and can have a value of the `Nullable` type as an argument.
|
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
|
||||||
|
|
||||||
ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query treats the Parquet `DECIMAL` type as the ClickHouse `Decimal128` type.
|
ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query treats the Parquet `DECIMAL` type as the ClickHouse `Decimal128` type.
|
||||||
|
|
||||||
@@ -1299,13 +1302,17 @@ The table below shows supported data types and how they match ClickHouse [data t
| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `FLOAT64` |
| `DATE32` | [Date](../sql-reference/data-types/date.md) | `UINT16` |
| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `UINT32` |
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
| `STRING`, `BINARY` | [FixedString](../sql-reference/data-types/fixedstring.md) | `BINARY` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `DECIMAL256` | [Decimal256](../sql-reference/data-types/decimal.md) | `DECIMAL256` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |

Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types can also be nested.

The `DICTIONARY` type is supported for `INSERT` queries, and for `SELECT` queries there is an [output_format_arrow_low_cardinality_as_dictionary](../operations/settings/settings.md#output-format-arrow-low-cardinality-as-dictionary) setting that allows outputting the [LowCardinality](../sql-reference/data-types/lowcardinality.md) type as a `DICTIONARY` type.

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the Arrow `DECIMAL` type as the ClickHouse `Decimal128` type.
|
|||||||
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
|
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
|
||||||
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
|
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
|
||||||
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
|
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
|
||||||
|
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
|
||||||
|
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |
|
||||||
|
|
||||||
Arrays can be nested and can have a value of the `Nullable` type as an argument.
|
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.
|
||||||
|
|
||||||
ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.
|
ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.
|
||||||
|
|
||||||
|
@@ -7,16 +7,21 @@ toc_title: HTTP Interface

The HTTP interface lets you use ClickHouse on any platform from any programming language. We use it for working from Java and Perl, as well as shell scripts. In other departments, the HTTP interface is used from Perl, Python, and Go. The HTTP interface is more limited than the native interface, but it has better compatibility.

By default, `clickhouse-server` listens for HTTP on port 8123 (this can be changed in the config).

If you make a `GET /` request without parameters, it returns the 200 response code and the string defined in [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response), whose default value is “Ok.” (with a line feed at the end).

``` bash
$ curl 'http://localhost:8123/'
Ok.
```

The Web UI can be accessed here: `http://localhost:8123/play`.

![Web UI](../images/play.png)

In health-check scripts, use the `GET /ping` request. This handler always returns “Ok.” (with a line feed at the end). Available from version 18.12.13.

``` bash
$ curl 'http://localhost:8123/ping'
@@ -51,8 +56,8 @@ X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","writ
1
```

As you can see, `curl` is somewhat inconvenient in that spaces must be URL escaped.
Although `wget` escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked.
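The escaping that `curl` needs can be done programmatically when building the URL yourself; this sketch only constructs the string (no server is contacted, and the host/port are the defaults mentioned above):

```python
from urllib.parse import quote

def clickhouse_url(query: str, host: str = "localhost", port: int = 8123) -> str:
    """Build an HTTP-interface URL with the statement passed as an
    URL-escaped `query` parameter (spaces become %20)."""
    return f"http://{host}:{port}/?query={quote(query)}"

print(clickhouse_url("SELECT 1"))
# http://localhost:8123/?query=SELECT%201
```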
``` bash
|
``` bash
|
||||||
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-
|
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-
|
||||||
@ -75,7 +80,7 @@ ECT 1
|
|||||||
, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
|
, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
|
||||||
```
|
```
|
||||||
|
|
||||||
By default, data is returned in TabSeparated format (for more information, see the “Formats” section).
|
By default, data is returned in [TabSeparated](formats.md#tabseparated) format.
|
||||||
|
|
||||||
You use the FORMAT clause of the query to request any other format.
|
You use the FORMAT clause of the query to request any other format.
|
||||||
|
|
||||||
@ -90,9 +95,11 @@ $ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @
|
|||||||
└───┘
|
└───┘
|
||||||
```
|
```
|
||||||
|
|
||||||
The POST method of transmitting data is necessary for INSERT queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the INSERT query replaces LOAD DATA LOCAL INFILE from MySQL.
|
The POST method of transmitting data is necessary for `INSERT` queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the `INSERT` query replaces `LOAD DATA LOCAL INFILE` from MySQL.
|
||||||
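For illustration, here is a sketch of such an `INSERT` over HTTP — assuming a server on `localhost:8123` and a table `t` with a single `UInt8` column, as created in the examples below (the two rows of data are hypothetical):

```shell
# Build two tab-separated rows and POST them; the beginning of the INSERT
# query is passed URL-encoded in the `query` URL parameter.
data="$(printf '1\n2')"
printf '%s\n' "$data" | curl -sS 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @- || true  # ignore failure if no server is running
```

On success, ClickHouse responds to such an `INSERT` with an empty body and the 200 response code.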
|
|
||||||
Examples: Creating a table:
|
**Examples**
|
||||||
|
|
||||||
|
Creating a table:
|
||||||
|
|
||||||
``` bash
|
``` bash
|
||||||
$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
|
$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
|
||||||
@ -632,6 +639,4 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
|
|||||||
<
|
<
|
||||||
<html><body>Relative Path File</body></html>
|
<html><body>Relative Path File</body></html>
|
||||||
* Connection #0 to host localhost left intact
|
* Connection #0 to host localhost left intact
|
||||||
```
|
```
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/interfaces/http_interface/) <!--hide-->
|
|
@ -43,7 +43,7 @@ toc_title: Integrations
|
|||||||
- Monitoring
|
- Monitoring
|
||||||
- [Graphite](https://graphiteapp.org)
|
- [Graphite](https://graphiteapp.org)
|
||||||
- [graphouse](https://github.com/yandex/graphouse)
|
- [graphouse](https://github.com/yandex/graphouse)
|
||||||
- [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) +
|
- [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse)
|
||||||
- [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
|
- [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
|
||||||
- [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - optimizes stale partitions in [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) if rules from the [rollup configuration](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) can be applied
|
- [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - optimizes stale partitions in [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) if rules from the [rollup configuration](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) can be applied
|
||||||
- [Grafana](https://grafana.com/)
|
- [Grafana](https://grafana.com/)
|
||||||
|
@ -59,6 +59,7 @@ toc_title: Adopters
|
|||||||
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
|
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
|
||||||
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
|
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
|
||||||
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
|
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
|
||||||
|
| <a href="https://infobaleen.com" class="favicon">Infobaleen</a> | AI marketing tool | Analytics | — | — | [Official site](https://infobaleen.com) |
|
||||||
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
|
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
|
||||||
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
|
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
|
||||||
| <a href="https://instabug.com/" class="favicon">Instabug</a> | APM Platform | Main product | — | — | [A quote from Co-Founder](https://altinity.com/) |
|
| <a href="https://instabug.com/" class="favicon">Instabug</a> | APM Platform | Main product | — | — | [A quote from Co-Founder](https://altinity.com/) |
|
||||||
@ -156,5 +157,6 @@ toc_title: Adopters
|
|||||||
| <a href="https://signoz.io/" class="favicon">SigNoz</a> | Observability Platform | Main Product | — | — | [Source code](https://github.com/SigNoz/signoz) |
|
| <a href="https://signoz.io/" class="favicon">SigNoz</a> | Observability Platform | Main Product | — | — | [Source code](https://github.com/SigNoz/signoz) |
|
||||||
| <a href="https://chelpipegroup.com/" class="favicon">ChelPipe Group</a> | Analytics | — | — | — | [Blog post, June 2021](https://vc.ru/trade/253172-tyazhelomu-proizvodstvu-user-friendly-sayt-internet-magazin-trub-dlya-chtpz) |
|
| <a href="https://chelpipegroup.com/" class="favicon">ChelPipe Group</a> | Analytics | — | — | — | [Blog post, June 2021](https://vc.ru/trade/253172-tyazhelomu-proizvodstvu-user-friendly-sayt-internet-magazin-trub-dlya-chtpz) |
|
||||||
| <a href="https://zagravagames.com/en/" class="favicon">Zagrava Trading</a> | — | — | — | — | [Job offer, May 2021](https://twitter.com/datastackjobs/status/1394707267082063874) |
|
| <a href="https://zagravagames.com/en/" class="favicon">Zagrava Trading</a> | — | — | — | — | [Job offer, May 2021](https://twitter.com/datastackjobs/status/1394707267082063874) |
|
||||||
|
| <a href="https://beeline.ru/" class="favicon">Beeline</a> | Telecom | Data Platform | — | — | [Blog post, July 2021](https://habr.com/en/company/beeline/blog/567508/) |
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) <!--hide-->
|
||||||
|
118
docs/en/operations/clickhouse-keeper.md
Normal file
118
docs/en/operations/clickhouse-keeper.md
Normal file
@ -0,0 +1,118 @@
|
|||||||
|
---
|
||||||
|
toc_priority: 66
|
||||||
|
toc_title: ClickHouse Keeper
|
||||||
|
---
|
||||||
|
|
||||||
|
# [pre-production] clickhouse-keeper
|
||||||
|
|
||||||
|
The ClickHouse server uses the [ZooKeeper](https://zookeeper.apache.org/) coordination system for data [replication](../engines/table-engines/mergetree-family/replication.md) and [distributed DDL](../sql-reference/distributed-ddl.md) query execution. ClickHouse Keeper is an alternative coordination system compatible with ZooKeeper.
|
||||||
|
|
||||||
|
!!! warning "Warning"
|
||||||
|
    This feature is currently in the pre-production stage. We test it in our CI and on small internal installations.
|
||||||
|
|
||||||
|
## Implementation details
|
||||||
|
|
||||||
|
ZooKeeper is one of the first well-known open-source coordination systems. It is implemented in Java and has a quite simple and powerful data model. ZooKeeper's coordination algorithm, ZAB (ZooKeeper Atomic Broadcast), does not provide linearizability guarantees for reads, because each ZooKeeper node serves reads locally. Unlike ZooKeeper, `clickhouse-keeper` is written in C++ and uses the [RAFT algorithm](https://raft.github.io/) [implementation](https://github.com/eBay/NuRaft). This algorithm provides linearizability for both reads and writes, and has several open-source implementations in different languages.
|
||||||
|
|
||||||
|
By default, `clickhouse-keeper` provides the same guarantees as ZooKeeper (linearizable writes, non-linearizable reads). It has a compatible client-server protocol, so any standard ZooKeeper client can be used to interact with `clickhouse-keeper`. Snapshots and logs have a format incompatible with ZooKeeper, but the `clickhouse-keeper-converter` tool allows converting ZooKeeper data to a `clickhouse-keeper` snapshot. The interserver protocol in `clickhouse-keeper` is also incompatible with ZooKeeper, so a mixed ZooKeeper/clickhouse-keeper cluster is impossible.
|
||||||
|
|
||||||
|
## Configuration
|
||||||
|
|
||||||
|
`clickhouse-keeper` can be used as a standalone replacement for ZooKeeper or as an internal part of `clickhouse-server`; in both cases the configuration is almost the same `.xml` file. The main `clickhouse-keeper` configuration tag is `<keeper_server>`. Keeper configuration has the following parameters:
|
||||||
|
|
||||||
|
- `tcp_port` — the port for clients to connect to (default for ZooKeeper is `2181`)
|
||||||
|
- `tcp_port_secure` — the secure port for clients to connect to
|
||||||
|
- `server_id` — unique server id, each participant of the clickhouse-keeper cluster must have a unique number (1, 2, 3, and so on)
|
||||||
|
- `log_storage_path` — path to coordination logs; it is better to store logs on a non-busy device (the same applies to ZooKeeper)
|
||||||
|
- `snapshot_storage_path` — path to coordination snapshots
|
||||||
|
|
||||||
|
Other common parameters are inherited from clickhouse-server config (`listen_host`, `logger` and so on).
|
||||||
|
|
||||||
|
Internal coordination settings are located in `<keeper_server>.<coordination_settings>` section:
|
||||||
|
|
||||||
|
- `operation_timeout_ms` — timeout for a single client operation (default: 10000)
|
||||||
|
- `session_timeout_ms` — timeout for client session (default: 30000)
|
||||||
|
- `dead_session_check_period_ms` — how often clickhouse-keeper checks for dead sessions and removes them (default: 500)
|
||||||
|
- `heart_beat_interval_ms` — how often a clickhouse-keeper leader will send heartbeats to followers (default: 500)
|
||||||
|
- `election_timeout_lower_bound_ms` — if a follower does not receive heartbeats from the leader within this interval, it can initiate a leader election (default: 1000)
|
||||||
|
- `election_timeout_upper_bound_ms` — if a follower does not receive heartbeats from the leader within this interval, it must initiate a leader election (default: 2000)
|
||||||
|
- `rotate_log_storage_interval` — how many log records to store in a single file (default: 100000)
|
||||||
|
- `reserved_log_items` — how many coordination log records to store before compaction (default: 100000)
|
||||||
|
- `snapshot_distance` — how often clickhouse-keeper will create new snapshots (in the number of records in logs) (default: 100000)
|
||||||
|
- `snapshots_to_keep` — how many snapshots to keep (default: 3)
|
||||||
|
- `stale_log_gap` — the threshold at which the leader considers a follower stale and sends it a snapshot instead of logs (default: 10000)
|
||||||
|
- `fresh_log_gap` — the log gap below which a node is considered fresh (default: 200)
|
||||||
|
- `max_requests_batch_size` — maximum batch size (in number of requests) before it is sent to RAFT (default: 100)
|
||||||
|
- `force_sync` — call `fsync` on each write to coordination log (default: true)
|
||||||
|
- `quorum_reads` — execute read requests as writes through the whole RAFT consensus, with similar speed (default: false)
|
||||||
|
- `raft_logs_level` — text logging level about coordination (trace, debug, and so on) (default: system default)
|
||||||
|
- `auto_forwarding` — allow forwarding write requests from followers to the leader (default: true)
|
||||||
|
- `shutdown_timeout` — how long to wait for internal connections to finish during shutdown, in ms (default: 5000)
|
||||||
|
- `startup_timeout` — if the server does not connect to other quorum participants within the specified timeout (ms), it will terminate (default: 30000)
|
||||||
|
|
||||||
|
Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains server descriptions. The only parameter for the whole quorum is `secure`, which enables an encrypted connection for communication between quorum participants. The main parameters for each `<server>` are:
|
||||||
|
|
||||||
|
- `id` — the server_id in the quorum
|
||||||
|
- `hostname` — the hostname of the machine where this server is placed
|
||||||
|
- `port` — the port where this server listens for connections
|
||||||
|
|
||||||
|
|
||||||
|
Examples of configuration for a quorum with three nodes can be found in [integration tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/integration) with the `test_keeper_` prefix. Example configuration for server #1:
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<keeper_server>
|
||||||
|
<tcp_port>2181</tcp_port>
|
||||||
|
<server_id>1</server_id>
|
||||||
|
<log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
|
||||||
|
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
|
||||||
|
|
||||||
|
<coordination_settings>
|
||||||
|
<operation_timeout_ms>10000</operation_timeout_ms>
|
||||||
|
<session_timeout_ms>30000</session_timeout_ms>
|
||||||
|
<raft_logs_level>trace</raft_logs_level>
|
||||||
|
</coordination_settings>
|
||||||
|
|
||||||
|
<raft_configuration>
|
||||||
|
<server>
|
||||||
|
<id>1</id>
|
||||||
|
<hostname>zoo1</hostname>
|
||||||
|
<port>9444</port>
|
||||||
|
</server>
|
||||||
|
<server>
|
||||||
|
<id>2</id>
|
||||||
|
<hostname>zoo2</hostname>
|
||||||
|
<port>9444</port>
|
||||||
|
</server>
|
||||||
|
<server>
|
||||||
|
<id>3</id>
|
||||||
|
<hostname>zoo3</hostname>
|
||||||
|
<port>9444</port>
|
||||||
|
</server>
|
||||||
|
</raft_configuration>
|
||||||
|
</keeper_server>
|
||||||
|
```
|
||||||
|
|
||||||
|
## How to run
|
||||||
|
|
||||||
|
`clickhouse-keeper` is bundled into the `clickhouse-server` package: just add the `<keeper_server>` configuration and start `clickhouse-server` as usual. If you want to run `clickhouse-keeper` standalone, you can start it in a similar way with:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
clickhouse-keeper --config /etc/your_path_to_config/config.xml --daemon
|
||||||
|
```
|
||||||
|
|
||||||
|
## [experimental] Migration from ZooKeeper
|
||||||
|
|
||||||
|
Seamless migration from ZooKeeper to `clickhouse-keeper` is impossible: you have to stop your ZooKeeper cluster, convert the data, and start `clickhouse-keeper`. The `clickhouse-keeper-converter` tool converts ZooKeeper logs and snapshots to a `clickhouse-keeper` snapshot. It works only with ZooKeeper > 3.4. Steps for migration:
|
||||||
|
|
||||||
|
1. Stop all ZooKeeper nodes.
|
||||||
|
|
||||||
|
2. [optional, but recommended] Find the ZooKeeper leader node, then start and stop it again. This forces ZooKeeper to create a consistent snapshot.
|
||||||
|
|
||||||
|
3. Run `clickhouse-keeper-converter` on the leader node, for example:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 --output-dir /path/to/clickhouse/keeper/snapshots
|
||||||
|
```
|
||||||
|
|
||||||
|
4. Copy the snapshot to the `clickhouse-server` nodes with a configured `keeper`, or start `clickhouse-keeper` instead of ZooKeeper. The snapshot must be present only on the leader node; the leader will sync it automatically to the other nodes.
|
||||||
|
|
@ -22,6 +22,23 @@ Some settings specified in the main configuration file can be overridden in othe
|
|||||||
|
|
||||||
The config can also define “substitutions”. If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md)).
|
The config can also define “substitutions”. If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md)).
|
||||||
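A minimal sketch of an `incl` substitution (the `macros` element and the replica value below are illustrative, not from the original page):

```xml
<!-- In the main config: take the value from the substitutions file;
     optional="true" suppresses logging if the substitution is missing. -->
<macros incl="macros" optional="true" />

<!-- In /etc/metrika.xml: the substitution itself, under /yandex. -->
<yandex>
    <macros>
        <replica>replica-01</replica>
    </macros>
</yandex>
```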
|
|
||||||
|
If you want to replace an entire element with a substitution, use `include` as the element name.
|
||||||
|
|
||||||
|
XML substitution example:
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<yandex>
|
||||||
|
<!-- Appends XML subtree found at `/profiles-in-zookeeper` ZK path to `<profiles>` element. -->
|
||||||
|
<profiles from_zk="/profiles-in-zookeeper" />
|
||||||
|
|
||||||
|
<users>
|
||||||
|
<!-- Replaces `include` element with the subtree found at `/users-in-zookeeper` ZK path. -->
|
||||||
|
<include from_zk="/users-in-zookeeper" />
|
||||||
|
<include from_zk="/other-users-in-zookeeper" />
|
||||||
|
</users>
|
||||||
|
</yandex>
|
||||||
|
```
|
||||||
|
|
||||||
Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node and it will be fully inserted into the source element.
|
Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node and it will be fully inserted into the source element.
|
||||||
|
|
||||||
## User Settings {#user-settings}
|
## User Settings {#user-settings}
|
||||||
@ -32,6 +49,8 @@ Users configuration can be splitted into separate files similar to `config.xml`
|
|||||||
The directory name is defined as the `users_config` setting value without the `.xml` postfix, concatenated with `.d`.
|
The directory name is defined as the `users_config` setting value without the `.xml` postfix, concatenated with `.d`.
|
||||||
Directory `users.d` is used by default, as `users_config` defaults to `users.xml`.
|
Directory `users.d` is used by default, as `users_config` defaults to `users.xml`.
|
||||||
|
|
||||||
|
Note that configuration files are first merged, taking [Override](#override) settings into account, and includes are processed after that.
|
||||||
|
|
||||||
## XML example {#example}
|
## XML example {#example}
|
||||||
|
|
||||||
For example, you can have a separate config file for each user, like this:
|
For example, you can have a separate config file for each user, like this:
|
||||||
|
@ -5,50 +5,67 @@ toc_title: Testing Hardware
|
|||||||
|
|
||||||
# How to Test Your Hardware with ClickHouse {#how-to-test-your-hardware-with-clickhouse}
|
# How to Test Your Hardware with ClickHouse {#how-to-test-your-hardware-with-clickhouse}
|
||||||
|
|
||||||
With this instruction you can run basic ClickHouse performance test on any server without installation of ClickHouse packages.
|
You can run a basic ClickHouse performance test on any server without installing ClickHouse packages.
|
||||||
|
|
||||||
1. Go to “commits” page: https://github.com/ClickHouse/ClickHouse/commits/master
|
|
||||||
2. Click on the first green check mark or red cross with green “ClickHouse Build Check” and click on the “Details” link near “ClickHouse Build Check”. There is no such link in some commits, for example commits with documentation. In this case, choose the nearest commit having this link.
|
## Automated Run
|
||||||
3. Copy the link to `clickhouse` binary for amd64 or aarch64.
|
|
||||||
4. ssh to the server and download it with wget:
|
You can run the benchmark with a single script.
|
||||||
|
|
||||||
|
1. Download the script.
|
||||||
|
```
|
||||||
|
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/hardware.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
2. Run the script.
|
||||||
|
```
|
||||||
|
chmod a+x ./hardware.sh
|
||||||
|
./hardware.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
3. Copy the output and send it to clickhouse-feedback@yandex-team.com
|
||||||
|
|
||||||
|
All the results are published here: https://clickhouse.tech/benchmark/hardware/
|
||||||
|
|
||||||
|
|
||||||
|
## Manual Run
|
||||||
|
|
||||||
|
Alternatively, you can perform the benchmark with the following steps.
|
||||||
|
|
||||||
|
1. ssh to the server and download the binary with wget:
|
||||||
```bash
|
```bash
|
||||||
# These links are outdated, please obtain the fresh link from the "commits" page.
|
|
||||||
# For amd64:
|
# For amd64:
|
||||||
wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse
|
wget https://builds.clickhouse.tech/master/amd64/clickhouse
|
||||||
# For aarch64:
|
# For aarch64:
|
||||||
wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_special_build_check/clang-10-aarch64_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse
|
wget https://builds.clickhouse.tech/master/aarch64/clickhouse
|
||||||
# Then do:
|
# Then do:
|
||||||
chmod a+x clickhouse
|
chmod a+x clickhouse
|
||||||
```
|
```
|
||||||
5. Download benchmark files:
|
2. Download benchmark files:
|
||||||
```bash
|
```bash
|
||||||
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/benchmark-new.sh
|
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/benchmark-new.sh
|
||||||
chmod a+x benchmark-new.sh
|
chmod a+x benchmark-new.sh
|
||||||
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/queries.sql
|
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/queries.sql
|
||||||
```
|
```
|
||||||
6. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows).
|
3. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows).
|
||||||
```bash
|
```bash
|
||||||
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
|
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
|
||||||
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
|
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
|
||||||
mv hits_100m_obfuscated_v1/* .
|
mv hits_100m_obfuscated_v1/* .
|
||||||
```
|
```
|
||||||
7. Run the server:
|
4. Run the server:
|
||||||
```bash
|
```bash
|
||||||
./clickhouse server
|
./clickhouse server
|
||||||
```
|
```
|
||||||
8. Check the data: ssh to the server in another terminal
|
5. Check the data: ssh to the server in another terminal
|
||||||
```bash
|
```bash
|
||||||
./clickhouse client --query "SELECT count() FROM hits_100m_obfuscated"
|
./clickhouse client --query "SELECT count() FROM hits_100m_obfuscated"
|
||||||
100000000
|
100000000
|
||||||
```
|
```
|
||||||
9. Edit the benchmark-new.sh, change `clickhouse-client` to `./clickhouse client` and add `--max_memory_usage 100000000000` parameter.
|
6. Run the benchmark:
|
||||||
```bash
|
|
||||||
mcedit benchmark-new.sh
|
|
||||||
```
|
|
||||||
10. Run the benchmark:
|
|
||||||
```bash
|
```bash
|
||||||
./benchmark-new.sh hits_100m_obfuscated
|
./benchmark-new.sh hits_100m_obfuscated
|
||||||
```
|
```
|
||||||
11. Send the numbers and the info about your hardware configuration to clickhouse-feedback@yandex-team.com
|
7. Send the numbers and the info about your hardware configuration to clickhouse-feedback@yandex-team.com
|
||||||
|
|
||||||
All the results are published here: https://clickhouse.tech/benchmark/hardware/
|
All the results are published here: https://clickhouse.tech/benchmark/hardware/
|
||||||
|
@ -34,6 +34,7 @@ Configuration template:
|
|||||||
<min_part_size>...</min_part_size>
|
<min_part_size>...</min_part_size>
|
||||||
<min_part_size_ratio>...</min_part_size_ratio>
|
<min_part_size_ratio>...</min_part_size_ratio>
|
||||||
<method>...</method>
|
<method>...</method>
|
||||||
|
<level>...</level>
|
||||||
</case>
|
</case>
|
||||||
...
|
...
|
||||||
</compression>
|
</compression>
|
||||||
@ -43,7 +44,8 @@ Configuration template:
|
|||||||
|
|
||||||
- `min_part_size` – The minimum size of a data part.
|
- `min_part_size` – The minimum size of a data part.
|
||||||
- `min_part_size_ratio` – The ratio of the data part size to the table size.
|
- `min_part_size_ratio` – The ratio of the data part size to the table size.
|
||||||
- `method` – Compression method. Acceptable values: `lz4` or `zstd`.
|
- `method` – Compression method. Acceptable values: `lz4`, `lz4hc`, `zstd`.
|
||||||
|
- `level` – Compression level. See [Codecs](../../sql-reference/statements/create/table/#create-query-general-purpose-codecs).
|
||||||
|
|
||||||
You can configure multiple `<case>` sections.
|
You can configure multiple `<case>` sections.
|
||||||
|
|
||||||
@ -62,6 +64,7 @@ If no conditions met for a data part, ClickHouse uses the `lz4` compression.
|
|||||||
<min_part_size>10000000000</min_part_size>
|
<min_part_size>10000000000</min_part_size>
|
||||||
<min_part_size_ratio>0.01</min_part_size_ratio>
|
<min_part_size_ratio>0.01</min_part_size_ratio>
|
||||||
<method>zstd</method>
|
<method>zstd</method>
|
||||||
|
<level>1</level>
|
||||||
</case>
|
</case>
|
||||||
</compression>
|
</compression>
|
||||||
```
|
```
|
||||||
@ -98,7 +101,7 @@ Default value: `1073741824` (1 GB).
|
|||||||
```xml
|
```xml
|
||||||
<core_dump>
|
<core_dump>
|
||||||
<size_limit>1073741824</size_limit>
|
<size_limit>1073741824</size_limit>
|
||||||
</core_dump>
|
</core_dump>
|
||||||
```
|
```
|
||||||
## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec}
|
## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec}
|
||||||
|
|
||||||
@ -439,8 +442,8 @@ The server will need access to the public Internet via IPv4 (at the time of writ
|
|||||||
|
|
||||||
Keys:
|
Keys:
|
||||||
|
|
||||||
- `enabled` – Boolean flag to enable the feature, `false` by default. Set to `true` to allow sending crash reports.
|
- `enabled` – Boolean flag to enable the feature, `false` by default. Set to `true` to allow sending crash reports.
|
||||||
- `endpoint` – You can override the Sentry endpoint URL for sending crash reports. It can be either a separate Sentry account or your self-hosted Sentry instance. Use the [Sentry DSN](https://docs.sentry.io/error-reporting/quickstart/?platform=native#configure-the-sdk) syntax.
|
- `endpoint` – You can override the Sentry endpoint URL for sending crash reports. It can be either a separate Sentry account or your self-hosted Sentry instance. Use the [Sentry DSN](https://docs.sentry.io/error-reporting/quickstart/?platform=native#configure-the-sdk) syntax.
|
||||||
- `anonymize` - Avoid attaching the server hostname to the crash report.
|
- `anonymize` - Avoid attaching the server hostname to the crash report.
|
||||||
- `http_proxy` - Configure HTTP proxy for sending crash reports.
|
- `http_proxy` - Configure HTTP proxy for sending crash reports.
|
||||||
- `debug` - Sets the Sentry client into debug mode.
|
- `debug` - Sets the Sentry client into debug mode.
|
||||||
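A minimal configuration sketch for the crash-report keys above (opt-in only; the value shown is illustrative):

```xml
<send_crash_reports>
    <!-- Crash reporting is disabled by default; set to true to opt in. -->
    <enabled>true</enabled>
</send_crash_reports>
```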
@ -502,7 +505,7 @@ The default `max_server_memory_usage` value is calculated as `memory_amount * ma
|
|||||||
|
|
||||||
## max_server_memory_usage_to_ram_ratio {#max_server_memory_usage_to_ram_ratio}
|
## max_server_memory_usage_to_ram_ratio {#max_server_memory_usage_to_ram_ratio}
|
||||||
|
|
||||||
Defines the fraction of the total physical RAM available to the ClickHouse server. If the server tries to utilize more, the memory is cut down to the appropriate amount.
|
Defines the fraction of the total physical RAM available to the ClickHouse server. If the server tries to utilize more, the memory is cut down to the appropriate amount.
|
||||||
|
|
||||||
Possible values:
|
Possible values:
|
||||||
|
|
||||||
@ -713,7 +716,7 @@ Keys for server/client settings:

- extendedVerification – Automatically extended verification of certificates after the session ends. Acceptable values: `true`, `false`.
- requireTLSv1 – Require a TLSv1 connection. Acceptable values: `true`, `false`.
- requireTLSv1_1 – Require a TLSv1.1 connection. Acceptable values: `true`, `false`.
- requireTLSv1_2 – Require a TLSv1.2 connection. Acceptable values: `true`, `false`.
- fips – Activates OpenSSL FIPS mode. Supported if the library’s OpenSSL version supports FIPS.
- privateKeyPassphraseHandler – Class (PrivateKeyPassphraseHandler subclass) that requests the passphrase for accessing the private key. For example: `<privateKeyPassphraseHandler>`, `<name>KeyFileHandler</name>`, `<options><password>test</password></options>`, `</privateKeyPassphraseHandler>`.
- invalidCertificateHandler – Class (a subclass of CertificateHandler) for verifying invalid certificates. For example: `<invalidCertificateHandler> <name>ConsoleCertificateHandler</name> </invalidCertificateHandler>`.
@ -880,7 +883,7 @@ Parameters:

- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.

**Example**

```xml
<yandex>
    <text_log>
        <level>notice</level>
@ -31,7 +31,7 @@ Settings that can only be made in the server config file are not covered in this

## Custom Settings {#custom_settings}

In addition to the common [settings](../../operations/settings/settings.md), users can define custom settings.

A custom setting name must begin with one of the predefined prefixes. The list of these prefixes must be declared in the [custom_settings_prefixes](../../operations/server-configuration-parameters/settings.md#custom_settings_prefixes) parameter in the server configuration file.
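A hedged sketch of the prefix declaration in the server configuration file (the prefix names are illustrative):

```xml
<!-- Custom setting names must start with one of these prefixes,
     e.g. custom_a or my_threshold (prefix names are illustrative). -->
<custom_settings_prefixes>custom_,my_</custom_settings_prefixes>
```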
@ -48,7 +48,7 @@ SET custom_a = 123;

To get the current value of a custom setting, use the `getSetting()` function:

```sql
SELECT getSetting('custom_a');
```

**See Also**
@ -278,4 +278,15 @@ Possible values:

Default value: `0`.

## check_sample_column_is_correct {#check_sample_column_is_correct}

Enables a check at table creation that the data type of a column used for sampling or as a sampling expression is correct. The data type must be one of the unsigned [integer types](../../sql-reference/data-types/int-uint.md): `UInt8`, `UInt16`, `UInt32`, `UInt64`.

Possible values:

- true — The check is enabled.
- false — The check is disabled at table creation.

Default value: `true`.

By default, the ClickHouse server checks the data type of a column used for sampling or as a sampling expression at table creation. If you already have tables with an incorrect sampling expression and do not want the server to raise an exception during startup, set `check_sample_column_is_correct` to `false`.

[Original article](https://clickhouse.tech/docs/en/operations/settings/merge_tree_settings/) <!--hide-->
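A hedged sketch of relaxing the check for a single table (table and column names are illustrative):

```sql
-- The Float32 sampling column below would normally be rejected;
-- check_sample_column_is_correct = 0 skips the type check for this table.
CREATE TABLE sampling_demo
(
    id UInt64,
    ratio Float32
)
ENGINE = MergeTree
ORDER BY (id, ratio)
SAMPLE BY ratio
SETTINGS check_sample_column_is_correct = 0;
```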
@ -65,20 +65,20 @@ What to do when the volume of data read exceeds one of the limits: ‘throw’ o

The following restrictions can be checked on each block (instead of on each row). That is, the restrictions can be broken a little.

A maximum number of rows that can be read from a local table on a leaf node when running a distributed query. While distributed queries can issue multiple sub-queries to each shard (leaf), this limit is checked only at the read stage on the leaf nodes and is ignored at the results merging stage on the root node. For example, a cluster consists of 2 shards and each shard contains a table with 100 rows. A distributed query which is supposed to read all the data from both tables with the setting `max_rows_to_read=150` will fail, as in total there will be 200 rows, while a query with `max_rows_to_read_leaf=150` will succeed, since the leaf nodes will read at most 100 rows each.
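A hedged sketch of the difference (the distributed table name is hypothetical):

```sql
-- With 2 shards of 100 rows each: the first query fails because the 200 rows
-- read in total exceed the root-level limit; the second succeeds because each
-- leaf reads only 100 rows.
SET max_rows_to_read = 150;
SELECT count() FROM distributed_table;

SET max_rows_to_read = 0, max_rows_to_read_leaf = 150;
SELECT count() FROM distributed_table;
```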
## max_bytes_to_read_leaf {#max-bytes-to-read-leaf}

A maximum number of bytes (uncompressed data) that can be read from a local table on a leaf node when running a distributed query. While distributed queries can issue multiple sub-queries to each shard (leaf), this limit is checked only at the read stage on the leaf nodes and is ignored at the results merging stage on the root node. For example, a cluster consists of 2 shards and each shard contains a table with 100 bytes of data. A distributed query which is supposed to read all the data from both tables with the setting `max_bytes_to_read=150` will fail, as in total there will be 200 bytes, while a query with `max_bytes_to_read_leaf=150` will succeed, since the leaf nodes will read at most 100 bytes each.
## read_overflow_mode_leaf {#read-overflow-mode-leaf}
@ -153,6 +153,26 @@ Possible values:

Default value: 1048576.

## table_function_remote_max_addresses {#table_function_remote_max_addresses}

Sets the maximum number of addresses generated from patterns for the [remote](../../sql-reference/table-functions/remote.md) function.

Possible values:

- Positive integer.

Default value: `1000`.

## glob_expansion_max_elements {#glob_expansion_max_elements}

Sets the maximum number of addresses generated from patterns for external storages and table functions (like [url](../../sql-reference/table-functions/url.md)), except the `remote` function.

Possible values:

- Positive integer.

Default value: `1000`.
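A hedged sketch of pattern expansion (hostnames and table name are illustrative):

```sql
-- The brace pattern expands to 100 addresses (shard1-1 … shard100-1),
-- well under the default limit of 1000; a pattern expanding to more than
-- table_function_remote_max_addresses addresses would be rejected.
SELECT count() FROM remote('shard{1..100}-1.example.com', default.hits);
```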
## send_progress_in_http_headers {#settings-send_progress_in_http_headers}

Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.
@ -509,6 +529,23 @@ Possible values:

Default value: `ALL`.

## join_algorithm {#settings-join_algorithm}

Specifies the [JOIN](../../sql-reference/statements/select/join.md) algorithm.

Possible values:

- `hash` — The [hash join algorithm](https://en.wikipedia.org/wiki/Hash_join) is used.
- `partial_merge` — The [sort-merge algorithm](https://en.wikipedia.org/wiki/Sort-merge_join) is used.
- `prefer_partial_merge` — ClickHouse always tries to use a `merge` join if possible.
- `auto` — ClickHouse tries to change a `hash` join to a `merge` join on the fly to avoid running out of memory.

Default value: `hash`.

When the `hash` algorithm is used, the right part of the `JOIN` is loaded into RAM.

When the `partial_merge` algorithm is used, ClickHouse sorts the data and dumps it to disk. The `merge` algorithm in ClickHouse differs a bit from the classic realization: first, ClickHouse sorts the right table by the [join key](../../sql-reference/statements/select/join.md#select-join) in blocks and creates a min-max index for the sorted blocks. Then it sorts parts of the left table by the `join key` and joins them over the right table. The min-max index is also used to skip unneeded blocks of the right table.
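A hedged usage sketch (table names are illustrative):

```sql
-- Let ClickHouse fall back from an in-memory hash join to a merge join
-- when the right-hand table would not fit in RAM.
SET join_algorithm = 'auto';

SELECT l.id, r.payload
FROM left_table AS l
INNER JOIN big_right_table AS r ON l.id = r.id;
```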
## join_any_take_last_row {#settings-join_any_take_last_row}

Changes the behaviour of join operations with `ANY` strictness.
@ -1213,7 +1250,15 @@ Default value: `3`.
|
|||||||
|
|
||||||
## output_format_json_quote_64bit_integers {#session_settings-output_format_json_quote_64bit_integers}
|
## output_format_json_quote_64bit_integers {#session_settings-output_format_json_quote_64bit_integers}
|
||||||
|
|
||||||
If the value is true, integers appear in quotes when using JSON\* Int64 and UInt64 formats (for compatibility with most JavaScript implementations); otherwise, integers are output without the quotes.
|
Controls quoting of 64-bit or bigger [integers](../../sql-reference/data-types/int-uint.md) (like `UInt64` or `Int128`) when they are output in a [JSON](../../interfaces/formats.md#json) format.
|
||||||
|
Such integers are enclosed in quotes by default. This behavior is compatible with most JavaScript implementations.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- 0 — Integers are output without quotes.
|
||||||
|
- 1 — Integers are enclosed in quotes.
|
||||||
|
|
||||||
|
Default value: 1.
|
||||||
|
|
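A hedged sketch of the effect:

```sql
-- With quoting enabled (the default), the value is emitted as the JSON string
-- "9223372036854775807"; with the setting at 0 it is emitted as a bare number,
-- which most JavaScript engines cannot represent exactly.
SET output_format_json_quote_64bit_integers = 1;
SELECT toUInt64(9223372036854775807) AS big FORMAT JSON;
```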
## output_format_json_quote_denormals {#settings-output_format_json_quote_denormals}
@ -1730,7 +1775,7 @@ Default value: 0.
|
|||||||
|
|
||||||
## optimize_functions_to_subcolumns {#optimize-functions-to-subcolumns}
|
## optimize_functions_to_subcolumns {#optimize-functions-to-subcolumns}
|
||||||
|
|
||||||
Enables or disables optimization by transforming some functions to reading subcolumns. This reduces the amount of data to read.
|
Enables or disables optimization by transforming some functions to reading subcolumns. This reduces the amount of data to read.
|
||||||
|
|
||||||
These functions can be transformed:
|
These functions can be transformed:
|
||||||
|
|
||||||
@ -1961,6 +2006,13 @@ Possible values: 32 (32 bytes) - 1073741824 (1 GiB)
|
|||||||
|
|
||||||
Default value: 32768 (32 KiB)
|
Default value: 32768 (32 KiB)
|
||||||
|
|
||||||
|
## output_format_avro_string_column_pattern {#output_format_avro_string_column_pattern}
|
||||||
|
|
||||||
|
Regexp of column names of type String to output as Avro `string` (default is `bytes`).
|
||||||
|
RE2 syntax is supported.
|
||||||
|
|
||||||
|
Type: string
|
||||||
|
|
||||||
## format_avro_schema_registry_url {#format_avro_schema_registry_url}
|
## format_avro_schema_registry_url {#format_avro_schema_registry_url}
|
||||||
|
|
||||||
Sets [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html) URL to use with [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format.
|
Sets [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html) URL to use with [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format.
|
||||||
@ -1990,6 +2042,16 @@ Possible values:

Default value: 16.

## merge_selecting_sleep_ms {#merge_selecting_sleep_ms}

Sleep time for merge selecting when no part is selected. A lower setting triggers selecting tasks in `background_schedule_pool` frequently, which results in a large number of requests to ZooKeeper in large-scale clusters.

Possible values:

- Any positive integer.

Default value: `5000`.
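A hedged server-config sketch (the interval value is illustrative):

```xml
<merge_tree>
    <!-- Illustrative: poll for mergeable parts every 10 s instead of the
         default 5 s to reduce ZooKeeper load on a large cluster. -->
    <merge_selecting_sleep_ms>10000</merge_selecting_sleep_ms>
</merge_tree>
```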
## parallel_distributed_insert_select {#parallel_distributed_insert_select}

Enables parallel distributed `INSERT ... SELECT` query.
@ -2865,7 +2927,7 @@ Result:
|
|||||||
└─────────────┘
|
└─────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Note that this setting influences [Materialized view](../../sql-reference/statements/create/view.md#materialized) and [MaterializeMySQL](../../engines/database-engines/materialize-mysql.md) behaviour.
|
Note that this setting influences [Materialized view](../../sql-reference/statements/create/view.md#materialized) and [MaterializedMySQL](../../engines/database-engines/materialized-mysql.md) behaviour.
|
||||||
|
|
||||||
## engine_file_empty_if_not_exists {#engine-file-empty_if-not-exists}
|
## engine_file_empty_if_not_exists {#engine-file-empty_if-not-exists}
|
||||||
|
|
||||||
@ -3123,6 +3185,53 @@ SELECT
|
|||||||
FROM fuse_tbl
|
FROM fuse_tbl
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## allow_experimental_database_replicated {#allow_experimental_database_replicated}
|
||||||
|
|
||||||
|
Enables to create databases with [Replicated](../../engines/database-engines/replicated.md) engine.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- 0 — Disabled.
|
||||||
|
- 1 — Enabled.
|
||||||
|
|
||||||
|
Default value: `0`.
|
||||||
|
|
||||||
|
## database_replicated_initial_query_timeout_sec {#database_replicated_initial_query_timeout_sec}
|
||||||
|
|
||||||
|
Sets how long initial DDL query should wait for Replicated database to precess previous DDL queue entries in seconds.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- Positive integer.
|
||||||
|
- 0 — Unlimited.
|
||||||
|
|
||||||
|
Default value: `300`.
|
||||||
|
|
||||||
|
## distributed_ddl_task_timeout {#distributed_ddl_task_timeout}
|
||||||
|
|
||||||
|
Sets timeout for DDL query responses from all hosts in cluster. If a DDL request has not been performed on all hosts, a response will contain a timeout error and a request will be executed in an async mode. Negative value means infinite.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- Positive integer.
|
||||||
|
- 0 — Async mode.
|
||||||
|
- Negative integer — infinite timeout.
|
||||||
|
|
||||||
|
Default value: `180`.
|
||||||
|
|
||||||
|
## distributed_ddl_output_mode {#distributed_ddl_output_mode}
|
||||||
|
|
||||||
|
Sets format of distributed DDL query result.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- `throw` — Returns result set with query execution status for all hosts where query is finished. If query has failed on some hosts, then it will rethrow the first exception. If query is not finished yet on some hosts and [distributed_ddl_task_timeout](#distributed_ddl_task_timeout) exceeded, then it throws `TIMEOUT_EXCEEDED` exception.
|
||||||
|
- `none` — Is similar to throw, but distributed DDL query returns no result set.
|
||||||
|
- `null_status_on_timeout` — Returns `NULL` as execution status in some rows of result set instead of throwing `TIMEOUT_EXCEEDED` if query is not finished on the corresponding hosts.
|
||||||
|
- `never_throw` — Do not throw `TIMEOUT_EXCEEDED` and do not rethrow exceptions if query has failed on some hosts.
|
||||||
|
|
||||||
|
Default value: `throw`.
|
||||||
|
|
||||||
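A hedged sketch of combining the two settings (the cluster and table names are illustrative):

```sql
-- Avoid a hard failure when some replicas are slow: after 60 seconds, a
-- per-host NULL status is returned instead of a TIMEOUT_EXCEEDED exception.
SET distributed_ddl_task_timeout = 60;
SET distributed_ddl_output_mode = 'null_status_on_timeout';

CREATE TABLE default.events ON CLUSTER my_cluster
(
    ts DateTime,
    msg String
)
ENGINE = MergeTree
ORDER BY ts;
```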
## flatten_nested {#flatten-nested}

Sets the data format of [nested](../../sql-reference/data-types/nested-data-structures/nested.md) columns.
@ -3202,3 +3311,14 @@ Default value: `1`.
|
|||||||
**Usage**
|
**Usage**
|
||||||
|
|
||||||
If the setting is set to `0`, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable for NULL values inside arrays.
|
If the setting is set to `0`, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable for NULL values inside arrays.
|
||||||
|
|
||||||
|
## output_format_arrow_low_cardinality_as_dictionary {#output-format-arrow-low-cardinality-as-dictionary}
|
||||||
|
|
||||||
|
Allows to convert the [LowCardinality](../../sql-reference/data-types/lowcardinality.md) type to the `DICTIONARY` type of the [Arrow](../../interfaces/formats.md#data-format-arrow) format for `SELECT` queries.
|
||||||
|
|
||||||
|
Possible values:
|
||||||
|
|
||||||
|
- 0 — The `LowCardinality` type is not converted to the `DICTIONARY` type.
|
||||||
|
- 1 — The `LowCardinality` type is converted to the `DICTIONARY` type.
|
||||||
|
|
||||||
|
Default value: `0`.
|
||||||
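A hedged usage sketch (the table and column names are illustrative):

```sql
-- Emit LowCardinality(String) columns as Arrow DICTIONARY-encoded data
-- instead of plain strings.
SET output_format_arrow_low_cardinality_as_dictionary = 1;
SELECT category FROM events FORMAT Arrow;
```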
@ -33,7 +33,7 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10

**See Also**

- [system.asynchronous_metrics](../system-tables/asynchronous_metrics.md) — Contains metrics, calculated periodically in the background.
- [system.metric_log](../system-tables/metric_log.md) — Contains a history of metric values from the tables `system.metrics` and `system.events`, periodically flushed to disk.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metric_log) <!--hide-->
@ -4,7 +4,7 @@ Contains information about columns in all the tables.

You can use this table to get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table) query, but for multiple tables at once.

Columns from [temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in `system.columns` only in the sessions where they were created. They are shown with an empty `database` field.

Columns:
@ -38,17 +38,17 @@ database: system

table:                   aggregate_function_combinators
name:                    name
type:                    String
default_kind:
default_expression:
data_compressed_bytes:   0
data_uncompressed_bytes: 0
marks_bytes:             0
comment:
is_in_partition_key:     0
is_in_sorting_key:       0
is_in_primary_key:       0
is_in_sampling_key:      0
compression_codec:

Row 2:
──────

@ -56,17 +56,17 @@ database: system

table:                   aggregate_function_combinators
name:                    is_internal
type:                    UInt8
default_kind:
default_expression:
data_compressed_bytes:   0
data_uncompressed_bytes: 0
marks_bytes:             0
comment:
is_in_partition_key:     0
is_in_sorting_key:       0
is_in_primary_key:       0
is_in_sampling_key:      0
compression_codec:
```

The `system.columns` table contains the following columns (the column type is shown in brackets):
@ -8,12 +8,11 @@ Columns:

- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
- `name` ([String](../../sql-reference/data-types/string.md)) — Index name.
- `type` ([String](../../sql-reference/data-types/string.md)) — Index type.
- `expr` ([String](../../sql-reference/data-types/string.md)) — Expression for the index calculation.
- `granularity` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of granules in the block.

**Example**

```sql
SELECT * FROM system.data_skipping_indices LIMIT 2 FORMAT Vertical;
```
@ -21,7 +21,7 @@ Columns:

│ default │ /var/lib/clickhouse/ │ 276392587264 │ 490652508160 │ 0 │
└─────────┴──────────────────────┴──────────────┴──────────────┴─────────────────┘

1 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/disks) <!--hide-->