Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-21 15:12:02 +00:00.

Commit a4a960328e: Merge remote-tracking branch 'origin' into integration-6
.gitmodules (vendored): 3 lines changed

@@ -168,9 +168,6 @@
 [submodule "contrib/fmtlib"]
 	path = contrib/fmtlib
 	url = https://github.com/fmtlib/fmt.git
-[submodule "contrib/antlr4-runtime"]
-	path = contrib/antlr4-runtime
-	url = https://github.com/ClickHouse-Extras/antlr4-runtime.git
 [submodule "contrib/sentry-native"]
 	path = contrib/sentry-native
 	url = https://github.com/ClickHouse-Extras/sentry-native.git
CHANGELOG.md: 156 lines added

@@ -1,3 +1,159 @@

### ClickHouse release v21.7, 2021-07-09

#### Backward Incompatible Change

* Improved performance of queries with explicitly defined large sets. Added a compatibility setting `legacy_column_name_of_tuple_literal`. It makes sense to set it to `true` while doing a rolling update of a cluster from a version lower than 21.7 to any higher version; otherwise distributed queries with explicitly defined sets in the `IN` clause may fail during the update (a sketch follows this list). [#25371](https://github.com/ClickHouse/ClickHouse/pull/25371) ([Anton Popov](https://github.com/CurtizJ)).
* Forward/backward incompatible change of the maximum buffer size in clickhouse-keeper (an experimental alternative to ZooKeeper). Better to do it now (before production) than later. [#25421](https://github.com/ClickHouse/ClickHouse/pull/25421) ([alesapin](https://github.com/alesapin)).
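A minimal sketch of the rolling-update workaround described above; the distributed table and its columns are hypothetical:

```sql
-- Keep the pre-21.7 naming of tuple literals while replicas run mixed versions,
-- so that all of them agree on the column name generated for the IN (...) set.
SET legacy_column_name_of_tuple_literal = 1;

SELECT count()
FROM distributed_table
WHERE (a, b) IN ((1, 2), (3, 4));
```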
#### New Feature

* Support configuration in YAML format as an alternative to XML. This closes [#3607](https://github.com/ClickHouse/ClickHouse/issues/3607). [#21858](https://github.com/ClickHouse/ClickHouse/pull/21858) ([BoloniniD](https://github.com/BoloniniD)).
* Provide a way to restore a replicated table when the data is (possibly) present but the ZooKeeper metadata is lost (a hedged sketch appears after this list). Resolves [#13458](https://github.com/ClickHouse/ClickHouse/issues/13458). [#13652](https://github.com/ClickHouse/ClickHouse/pull/13652) ([Mike Kot](https://github.com/myrrc)).
* Support structs and maps in Arrow/Parquet/ORC, and dictionaries in Arrow input/output formats. Adds a new setting `output_format_arrow_low_cardinality_as_dictionary`. [#24341](https://github.com/ClickHouse/ClickHouse/pull/24341) ([Kruglov Pavel](https://github.com/Avogar)).
* Added support for the `Array` type in dictionaries. [#25119](https://github.com/ClickHouse/ClickHouse/pull/25119) ([Maksim Kita](https://github.com/kitaisreal)).
* Added the function `bitPositionsToArray` (example queries for this and other new functions follow this list). Closes [#23792](https://github.com/ClickHouse/ClickHouse/issues/23792). Author: Kevin Wan (@MaxWk). [#25394](https://github.com/ClickHouse/ClickHouse/pull/25394) ([Maksim Kita](https://github.com/kitaisreal)).
* Added the function `dateName` to return names like 'Friday' or 'April'. Author: Daniil Kondratyev (@dankondr). [#25372](https://github.com/ClickHouse/ClickHouse/pull/25372) ([Maksim Kita](https://github.com/kitaisreal)).
* Add the `toJSONString` function to serialize columns to their JSON representations. [#25164](https://github.com/ClickHouse/ClickHouse/pull/25164) ([Amos Bird](https://github.com/amosbird)).
* Now `query_log` has two new columns, `initial_query_start_time` and `initial_query_start_time_microsecond`, that record the starting time of a distributed query, if any. [#25022](https://github.com/ClickHouse/ClickHouse/pull/25022) ([Amos Bird](https://github.com/amosbird)).
* Add the aggregate function `segmentLengthSum`. [#24250](https://github.com/ClickHouse/ClickHouse/pull/24250) ([flynn](https://github.com/ucasfl)).
* Add a new boolean setting `prefer_global_in_and_join` which makes all `IN`/`JOIN` operators behave as `GLOBAL IN`/`GLOBAL JOIN` by default (a sketch follows this list). [#23434](https://github.com/ClickHouse/ClickHouse/pull/23434) ([Amos Bird](https://github.com/amosbird)).
* Support `ALTER DELETE` queries for the `Join` table engine. [#23260](https://github.com/ClickHouse/ClickHouse/pull/23260) ([foolchi](https://github.com/foolchi)).
* Add the `quantileBFloat16` aggregate function, as well as the corresponding `quantilesBFloat16` and `medianBFloat16`. It is a very simple and fast quantile estimator with a relative error of no more than 0.390625%. This closes [#16641](https://github.com/ClickHouse/ClickHouse/issues/16641). [#23204](https://github.com/ClickHouse/ClickHouse/pull/23204) ([Ivan Novitskiy](https://github.com/RedClusive)).
* Implement the `sequenceNextNode()` function, useful for flow analysis. [#19766](https://github.com/ClickHouse/ClickHouse/pull/19766) ([achimbab](https://github.com/achimbab)).
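A few hedged examples of the new functions above; the outputs in comments are indicative:

```sql
SELECT bitPositionsToArray(toUInt8(5));            -- [0, 2]: bits 0 and 2 of 5 are set
SELECT dateName('weekday', toDate('2021-07-09'));  -- 'Friday'
SELECT dateName('month', toDate('2021-07-09'));    -- 'July'
SELECT toJSONString([1, 2, 3]);                    -- '[1,2,3]'
SELECT quantileBFloat16(0.5)(number)
FROM numbers(1000);                                -- close to 500, within the stated error bound
```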
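A minimal sketch of `prefer_global_in_and_join`; the distributed tables here are hypothetical:

```sql
SET prefer_global_in_and_join = 1;

-- With the setting enabled, this behaves as if GLOBAL JOIN had been written:
-- the right-hand side is collected once and shipped to all shards.
SELECT count()
FROM distributed_events AS e
JOIN distributed_users AS u USING (user_id);
```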
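A sketch of the metadata-restore workflow above, assuming the `SYSTEM RESTORE REPLICA` statement introduced by the linked PR; the table name is hypothetical:

```sql
-- Recreates the replica's ZooKeeper metadata from the data parts
-- that are still present on local disk.
SYSTEM RESTORE REPLICA test_table;
```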
#### Experimental Feature

* Add support for a virtual filesystem over HDFS. [#11058](https://github.com/ClickHouse/ClickHouse/pull/11058) ([overshov](https://github.com/overshov)) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Now clickhouse-keeper (an experimental alternative to ZooKeeper) supports ZooKeeper-like `digest` ACLs. [#24448](https://github.com/ClickHouse/ClickHouse/pull/24448) ([alesapin](https://github.com/alesapin)).

#### Performance Improvement

* Added an optimization that transforms some functions into reading of subcolumns, to reduce the amount of data read. E.g., the expression `col IS NULL` is transformed into reading the subcolumn `col.null`. The optimization can be enabled by the setting `optimize_functions_to_subcolumns`, which is currently off by default (first sketch after this list). [#24406](https://github.com/ClickHouse/ClickHouse/pull/24406) ([Anton Popov](https://github.com/CurtizJ)).
* Rewrite more columns to possible alias expressions. This may enable better optimizations, such as projections. [#24405](https://github.com/ClickHouse/ClickHouse/pull/24405) ([Amos Bird](https://github.com/amosbird)).
* An index of type `bloom_filter` can now be used for expressions with the `hasAny` function and constant arrays (second sketch after this list). This closes [#24291](https://github.com/ClickHouse/ClickHouse/issues/24291). [#24900](https://github.com/ClickHouse/ClickHouse/pull/24900) ([Vasily Nemkov](https://github.com/Enmk)).
* Add exponential backoff to reschedule read attempts when RabbitMQ queues are empty (ClickHouse supports importing data from RabbitMQ). Closes [#24340](https://github.com/ClickHouse/ClickHouse/issues/24340). [#24415](https://github.com/ClickHouse/ClickHouse/pull/24415) ([Kseniia Sumarokova](https://github.com/kssenii)).
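A sketch of the subcolumn optimization above; the table `t` and column `col` are hypothetical:

```sql
SET optimize_functions_to_subcolumns = 1;

-- `col IS NULL` is rewritten to read only the small `col.null` subcolumn
-- instead of the full column data.
SELECT count() FROM t WHERE col IS NULL;
```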
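A hedged sketch of the `bloom_filter`/`hasAny` case above; the table, column, and index name are hypothetical:

```sql
ALTER TABLE t ADD INDEX tags_bf tags TYPE bloom_filter GRANULARITY 4;

-- With a constant array argument, this predicate can now be pruned
-- using the bloom_filter index.
SELECT count() FROM t WHERE hasAny(tags, ['urgent', 'critical']);
```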
#### Improvement

* Allow limiting bandwidth for replication. Added two Replicated\*MergeTree settings, `max_replicated_fetches_network_bandwidth` and `max_replicated_sends_network_bandwidth`, which limit the maximum speed of replicated fetches/sends for a table, and two server-wide settings (in the `default` user profile), `max_replicated_fetches_network_bandwidth_for_server` and `max_replicated_sends_network_bandwidth_for_server`, which limit the maximum speed of replication for all tables. The limits are not followed perfectly accurately. Turned off by default (a sketch follows this list). Fixes [#1821](https://github.com/ClickHouse/ClickHouse/issues/1821). [#24573](https://github.com/ClickHouse/ClickHouse/pull/24573) ([alesapin](https://github.com/alesapin)).
* Resource constraints and isolation for ODBC and Library bridges. Use a separate `clickhouse-bridge` group and user for bridge processes. Set `oom_score_adj` so that the bridges are the first candidates for the OOM killer. Set maximum RSS to 1 GiB. Closes [#23861](https://github.com/ClickHouse/ClickHouse/issues/23861). [#25280](https://github.com/ClickHouse/ClickHouse/pull/25280) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a standalone `clickhouse-keeper` symlink to the main `clickhouse` binary. Now it's possible to run coordination without the main ClickHouse server. [#24059](https://github.com/ClickHouse/ClickHouse/pull/24059) ([alesapin](https://github.com/alesapin)).
* Use global settings for queries to a `VIEW`. Fixed the behavior where queries to a `VIEW` used local settings, which led to errors if the settings at `CREATE VIEW` time and at `SELECT` time differed. As of now, a `VIEW` won't use these modified settings, but you can still pass additional settings in the `SETTINGS` section of the `CREATE VIEW` query. Closes [#20551](https://github.com/ClickHouse/ClickHouse/issues/20551). [#24095](https://github.com/ClickHouse/ClickHouse/pull/24095) ([Vladimir](https://github.com/vdimir)).
* On server start, parts with an incorrect partition ID are now never removed, but always detached. [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25166](https://github.com/ClickHouse/ClickHouse/pull/25166) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase the size of the background schedule pool to 128 (the `background_schedule_pool_size` setting). This helps avoid the replication queue hanging on a slow ZooKeeper connection. [#25072](https://github.com/ClickHouse/ClickHouse/pull/25072) ([alesapin](https://github.com/alesapin)).
* Add the merge tree setting `max_parts_to_merge_at_once`, which limits the number of parts that can be merged in the background at once. It doesn't affect the `OPTIMIZE FINAL` query. Fixes [#1820](https://github.com/ClickHouse/ClickHouse/issues/1820). [#24496](https://github.com/ClickHouse/ClickHouse/pull/24496) ([alesapin](https://github.com/alesapin)).
* Allow the `NOT IN` operator to be used in partition pruning. [#24894](https://github.com/ClickHouse/ClickHouse/pull/24894) ([Amos Bird](https://github.com/amosbird)).
* Recognize IPv4 addresses like `127.0.1.1` as local. This is controversial and closes [#23504](https://github.com/ClickHouse/ClickHouse/issues/23504). Michael Filimonov will test this feature. [#24316](https://github.com/ClickHouse/ClickHouse/pull/24316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* A ClickHouse database created with MaterializeMySQL (an experimental feature) now contains all column comments from the MySQL database being materialized. [#25199](https://github.com/ClickHouse/ClickHouse/pull/25199) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Add settings (`connection_auto_close`/`connection_max_tries`/`connection_pool_size`) for the MySQL storage engine. [#24146](https://github.com/ClickHouse/ClickHouse/pull/24146) ([Azat Khuzhin](https://github.com/azat)).
* Improve startup time of the Distributed engine. [#25663](https://github.com/ClickHouse/ClickHouse/pull/25663) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Drop replicas from the dirname for `internal_replication=true` (allows INSERT into a Distributed table over a cluster with any number of replicas; previously only up to 15 replicas were supported, and more would fail with ENAMETOOLONG while creating the directory for async blocks). [#25513](https://github.com/ClickHouse/ClickHouse/pull/25513) ([Azat Khuzhin](https://github.com/azat)).
* Added support of the `Interval` type for `LowCardinality`. It is needed for intermediate values of some expressions. Closes [#21730](https://github.com/ClickHouse/ClickHouse/issues/21730). [#25410](https://github.com/ClickHouse/ClickHouse/pull/25410) ([Vladimir](https://github.com/vdimir)).
* Add the `==` operator on time conditions for the `sequenceMatch` and `sequenceCount` functions, e.g. `sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)` (written out after this list). [#25299](https://github.com/ClickHouse/ClickHouse/pull/25299) ([Christophe Kalenzaga](https://github.com/mga-chka)).
* Add settings `http_max_fields`, `http_max_field_name_size`, `http_max_field_value_size`. [#25296](https://github.com/ClickHouse/ClickHouse/pull/25296) ([Ivan](https://github.com/abyss7)).
* Add support for the function `if` with `Decimal` and `Int` types on its branches. This closes [#20549](https://github.com/ClickHouse/ClickHouse/issues/20549). This closes [#10142](https://github.com/ClickHouse/ClickHouse/issues/10142). [#25283](https://github.com/ClickHouse/ClickHouse/pull/25283) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update the prompt in `clickhouse-client` and display a message when reconnecting. This closes [#10577](https://github.com/ClickHouse/ClickHouse/issues/10577). [#25281](https://github.com/ClickHouse/ClickHouse/pull/25281) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correct memory tracking in the aggregate function `topK`. This closes [#25259](https://github.com/ClickHouse/ClickHouse/issues/25259). [#25260](https://github.com/ClickHouse/ClickHouse/pull/25260) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `topLevelDomain` for IDN hosts (i.e. `example.рф`); previously it returned an empty string for such hosts. [#25103](https://github.com/ClickHouse/ClickHouse/pull/25103) ([Azat Khuzhin](https://github.com/azat)).
* Detect the Linux kernel version at runtime (to make nested epoll work, which is required for `async_socket_for_remote`/`use_hedged_requests`; otherwise remote queries may get stuck). [#25067](https://github.com/ClickHouse/ClickHouse/pull/25067) ([Azat Khuzhin](https://github.com/azat)).
* For distributed queries with `optimize_skip_unused_shards=1`, allow skipping a shard with a condition like `(sharding key) IN (one-element-tuple)`. (Tuples with many elements were already supported; a tuple with a single element did not work because it is parsed as a literal.) [#24930](https://github.com/ClickHouse/ClickHouse/pull/24930) ([Amos Bird](https://github.com/amosbird)).
* Improved log messages of S3 errors: no more double whitespaces in case of empty keys and buckets. [#24897](https://github.com/ClickHouse/ClickHouse/pull/24897) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Some queries require multi-pass semantic analysis. Try reusing built sets for `IN` in this case. [#24874](https://github.com/ClickHouse/ClickHouse/pull/24874) ([Amos Bird](https://github.com/amosbird)).
* Respect `max_distributed_connections` for `insert_distributed_sync` (otherwise, for huge clusters and synchronous inserts, it may run out of `max_thread_pool_size`). [#24754](https://github.com/ClickHouse/ClickHouse/pull/24754) ([Azat Khuzhin](https://github.com/azat)).
* Avoid hiding errors like `Limit for rows or bytes to read exceeded` for scalar subqueries. [#24545](https://github.com/ClickHouse/ClickHouse/pull/24545) ([nvartolomei](https://github.com/nvartolomei)).
* Make the String-to-Int parser stricter, so that `toInt64('+')` now throws (illustrated after this list). [#24475](https://github.com/ClickHouse/ClickHouse/pull/24475) ([Amos Bird](https://github.com/amosbird)).
* If `SSD_CACHE` is created with a DDL query, it can be created only inside the `user_files` directory. [#24466](https://github.com/ClickHouse/ClickHouse/pull/24466) ([Maksim Kita](https://github.com/kitaisreal)).
* PostgreSQL support for specifying a non-default schema for insert queries. Closes [#24149](https://github.com/ClickHouse/ClickHouse/issues/24149). [#24413](https://github.com/ClickHouse/ClickHouse/pull/24413) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix IPv6 address resolution (i.e. fixes `select * from remote('[::1]', system.one)`). [#24319](https://github.com/ClickHouse/ClickHouse/pull/24319) ([Azat Khuzhin](https://github.com/azat)).
* Fix trailing whitespace in the FROM clause with subqueries in multiline mode; also slightly change the output of queries in a more human-friendly way. [#24151](https://github.com/ClickHouse/ClickHouse/pull/24151) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Add the ability to split a distributed batch on failures (i.e. due to memory limits or corruption), under `distributed_directory_monitor_split_batch_on_failure` (OFF by default). [#23864](https://github.com/ClickHouse/ClickHouse/pull/23864) ([Azat Khuzhin](https://github.com/azat)).
* Handle column name clashes for the `Join` table engine. Closes [#20309](https://github.com/ClickHouse/ClickHouse/issues/20309). [#23769](https://github.com/ClickHouse/ClickHouse/pull/23769) ([Vladimir](https://github.com/vdimir)).
* Display progress for the `File` table engine in `clickhouse-local` and on INSERT queries in `clickhouse-client` when data is passed via stdin. Closes [#18209](https://github.com/ClickHouse/ClickHouse/issues/18209). [#23656](https://github.com/ClickHouse/ClickHouse/pull/23656) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Bugfixes and improvements of `clickhouse-copier`. Allow copying tables with different (but compatible) schemas; closes [#9159](https://github.com/ClickHouse/ClickHouse/issues/9159). Added a test for copying a ReplacingMergeTree table; closes [#22711](https://github.com/ClickHouse/ClickHouse/issues/22711). Support TTL on columns and data skipping indices: they are simply removed when creating the internal Distributed table (the underlying table keeps the TTL and skipping indices); closes [#19384](https://github.com/ClickHouse/ClickHouse/issues/19384). Allow copying MATERIALIZED and ALIAS columns; there are cases in which this is helpful (e.g. if such a column is in the PRIMARY KEY), and it can be enabled by setting the `allow_to_copy_alias_and_materialized_columns` property to true in the task configuration; closes [#9177](https://github.com/ClickHouse/ClickHouse/issues/9177), closes [#11007](https://github.com/ClickHouse/ClickHouse/issues/11007), closes [#9514](https://github.com/ClickHouse/ClickHouse/issues/9514). Added a property `allow_to_drop_target_partitions` in the task configuration to drop a partition in the original table before moving helping tables; closes [#20957](https://github.com/ClickHouse/ClickHouse/issues/20957). Got rid of the `OPTIMIZE DEDUPLICATE` query: this hack was needed because `ALTER TABLE MOVE PARTITION` was retried many times and plain MergeTree tables don't have deduplication; closes [#17966](https://github.com/ClickHouse/ClickHouse/issues/17966). Write progress to a ZooKeeper node at the path `task_path + /status` in JSON format; closes [#20955](https://github.com/ClickHouse/ClickHouse/issues/20955). Support ReplicatedTables without arguments; closes [#24834](https://github.com/ClickHouse/ClickHouse/issues/24834). [#23518](https://github.com/ClickHouse/ClickHouse/pull/23518) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added sleep with backoff between read retries from S3. [#23461](https://github.com/ClickHouse/ClickHouse/pull/23461) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Respect `insert_allow_materialized_columns` (allows materialized columns) for INSERT into a `Distributed` table. [#23349](https://github.com/ClickHouse/ClickHouse/pull/23349) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to push down LIMIT for distributed queries. [#23027](https://github.com/ClickHouse/ClickHouse/pull/23027) ([Azat Khuzhin](https://github.com/azat)).
* Fix zero-copy replication with several S3 volumes (fixes [#22679](https://github.com/ClickHouse/ClickHouse/issues/22679)). [#22864](https://github.com/ClickHouse/ClickHouse/pull/22864) ([ianton-ru](https://github.com/ianton-ru)).
* Resolve the actual port number bound when a user requests any available port from the operating system, to show it in the log message. [#25569](https://github.com/ClickHouse/ClickHouse/pull/25569) ([bnaecker](https://github.com/bnaecker)).
* Fixed a case where conversion of PostgreSQL arrays sometimes resulted in a String data type, not an n-dimensional array, because `attndims` works incorrectly in some cases. Closes [#24804](https://github.com/ClickHouse/ClickHouse/issues/24804). [#25538](https://github.com/ClickHouse/ClickHouse/pull/25538) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix conversion of DateTime with timezone for MySQL, PostgreSQL, ODBC. Closes [#5057](https://github.com/ClickHouse/ClickHouse/issues/5057). [#25528](https://github.com/ClickHouse/ClickHouse/pull/25528) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Distinguish KILL MUTATION for different tables (fixes an unexpected `Cancelled mutating parts` error). [#25025](https://github.com/ClickHouse/ClickHouse/pull/25025) ([Azat Khuzhin](https://github.com/azat)).
* Allow declaring an S3 disk at the root of a bucket (the S3 virtual filesystem is an experimental feature under development). [#24898](https://github.com/ClickHouse/ClickHouse/pull/24898) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Enable reading of subcolumns (e.g. components of Tuples) for distributed tables. [#24472](https://github.com/ClickHouse/ClickHouse/pull/24472) ([Anton Popov](https://github.com/CurtizJ)).
* A feature for the MySQL compatibility protocol: make the `user` function return correct output. Closes [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697). [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697) ([sundyli](https://github.com/sundy-li)).
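A hedged sketch of per-table replication bandwidth limiting; the table name is hypothetical and the value is in bytes per second:

```sql
-- Limit replicated fetches for this table to 50 MiB/s.
ALTER TABLE replicated_table
    MODIFY SETTING max_replicated_fetches_network_bandwidth = 52428800;
```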
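The time-equality condition above, written out as a full query; the `events` table and its columns are hypothetical:

```sql
-- Matches when an event with data = 2 occurs exactly 1 second
-- after an event with data = 1.
SELECT sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)
FROM events;
```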
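Illustration of the stricter String-to-Int parsing; the `...OrZero` variant shown for contrast is the long-standing forgiving form:

```sql
SELECT toInt64('+');        -- now throws instead of accepting a bare sign
SELECT toInt64OrZero('+');  -- 0: the forgiving variant returns a default
```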
#### Bug Fix

* Improvement for backward compatibility: use the old version of the modulo function when it is used in a partition key. Closes [#23508](https://github.com/ClickHouse/ClickHouse/issues/23508). [#24157](https://github.com/ClickHouse/ClickHouse/pull/24157) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix an extremely rare bug on low-memory servers which could lead to the inability to perform merges without a restart. Possibly fixes [#24603](https://github.com/ClickHouse/ClickHouse/issues/24603). [#24872](https://github.com/ClickHouse/ClickHouse/pull/24872) ([alesapin](https://github.com/alesapin)).
* Fix an extremely rare error `Tagging already tagged part` in the replication queue during concurrent `ALTER MOVE`/`REPLACE PARTITION`. Possibly fixes [#22142](https://github.com/ClickHouse/ClickHouse/issues/22142). [#24961](https://github.com/ClickHouse/ClickHouse/pull/24961) ([alesapin](https://github.com/alesapin)).
* Fix a potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See [#24523](https://github.com/ClickHouse/ClickHouse/issues/24523). [#25015](https://github.com/ClickHouse/ClickHouse/pull/25015) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the behavior where the queries `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` do not finish. This was detected on a server with an extremely low amount of RAM. [#24457](https://github.com/ClickHouse/ClickHouse/pull/24457) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug which could lead to the ZooKeeper client hanging inside clickhouse-server. [#24721](https://github.com/ClickHouse/ClickHouse/pull/24721) ([alesapin](https://github.com/alesapin)).
* If the ZooKeeper connection was lost and a replica was cloned after restoring the connection, its replication queue might contain outdated entries. Fixed a failed assertion when the replication queue contains intersecting virtual parts; it may rarely happen if some data part was lost. Print an error in the log instead of terminating. [#24777](https://github.com/ClickHouse/ClickHouse/pull/24777) ([tavplubix](https://github.com/tavplubix)).
* Fix a lost `WHERE` condition in the expression-push-down optimization of the query plan (setting `query_plan_filter_push_down = 1` by default). Fixes [#25368](https://github.com/ClickHouse/ClickHouse/issues/25368). [#25370](https://github.com/ClickHouse/ClickHouse/pull/25370) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug which could lead to intersecting parts after merges with TTL: `Part all_40_40_0 is covered by all_40_40_1 but should be merged into all_40_41_1. This shouldn't happen often.` [#25549](https://github.com/ClickHouse/ClickHouse/pull/25549) ([alesapin](https://github.com/alesapin)).
* On ZooKeeper connection loss, a `ReplicatedMergeTree` table might wait for background operations to complete before trying to reconnect. It's fixed: now background operations are stopped forcefully. [#25306](https://github.com/ClickHouse/ClickHouse/pull/25306) ([tavplubix](https://github.com/tavplubix)).
* Fix the error `Key expression contains comparison between inconvertible types` for queries with `ARRAY JOIN` in case an array is used in the primary key. Fixes [#8247](https://github.com/ClickHouse/ClickHouse/issues/8247). [#25546](https://github.com/ClickHouse/ClickHouse/pull/25546) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong totals for queries `WITH TOTALS` and `WITH FILL`. Fixes [#20872](https://github.com/ClickHouse/ClickHouse/issues/20872). [#25539](https://github.com/ClickHouse/ClickHouse/pull/25539) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a data race when querying `system.clusters` while reloading the cluster configuration at the same time. [#25737](https://github.com/ClickHouse/ClickHouse/pull/25737) ([Amos Bird](https://github.com/amosbird)).
* Fixed a `No such file or directory` error on moving a `Distributed` table between databases. Fixes [#24971](https://github.com/ClickHouse/ClickHouse/issues/24971). [#25667](https://github.com/ClickHouse/ClickHouse/pull/25667) ([tavplubix](https://github.com/tavplubix)).
* `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes [#24869](https://github.com/ClickHouse/ClickHouse/issues/24869). [#25665](https://github.com/ClickHouse/ClickHouse/pull/25665) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in the `Replicated` database engine that might rarely cause some replica to skip an enqueued DDL query. [#24805](https://github.com/ClickHouse/ClickHouse/pull/24805) ([tavplubix](https://github.com/tavplubix)).
* Fix a null pointer dereference in `EXPLAIN AST` without a query. [#25631](https://github.com/ClickHouse/ClickHouse/pull/25631) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix waiting for the automatic dropping of empty parts. It could lead to filling up the background pool and stalling replication. [#23315](https://github.com/ClickHouse/ClickHouse/pull/23315) ([Anton Popov](https://github.com/CurtizJ)).
* Fix restore of a table stored in the S3 virtual filesystem (an experimental feature not ready for production). [#25601](https://github.com/ClickHouse/ClickHouse/pull/25601) ([ianton-ru](https://github.com/ianton-ru)).
* Fix a nullptr dereference in the `Arrow` format when using `Decimal256`. Add `Decimal256` support for the `Arrow` format. [#25531](https://github.com/ClickHouse/ClickHouse/pull/25531) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix an excessive underscore before the names of the preprocessed configuration files. [#25431](https://github.com/ClickHouse/ClickHouse/pull/25431) ([Vitaly Baranov](https://github.com/vitlibar)).
* A fix for the `clickhouse-copier` tool: fix a segfault when `sharding_key` is absent in the task config. [#25419](https://github.com/ClickHouse/ClickHouse/pull/25419) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the `REPLACE` column transformer when used in DDL, by correctly quoting the formatted query. This fixes [#23925](https://github.com/ClickHouse/ClickHouse/issues/23925). [#25391](https://github.com/ClickHouse/ClickHouse/pull/25391) ([Amos Bird](https://github.com/amosbird)).
* Fix the possibility of non-deterministic behaviour of the `quantileDeterministic` function and similar ones. This closes [#20480](https://github.com/ClickHouse/ClickHouse/issues/20480). [#25313](https://github.com/ClickHouse/ClickHouse/pull/25313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `SimpleAggregateFunction(LowCardinality)` for `SummingMergeTree`. Fixes [#25134](https://github.com/ClickHouse/ClickHouse/issues/25134). [#25300](https://github.com/ClickHouse/ClickHouse/pull/25300) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a logical error with the exception message "Cannot sum Array/Tuple in min/maxMap". [#25298](https://github.com/ClickHouse/ClickHouse/pull/25298) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the error `Bad cast from type DB::ColumnLowCardinality to DB::ColumnVector<char8_t>` for queries where a `LowCardinality` argument was used for IN (this bug appeared in 21.6). Fixes [#25187](https://github.com/ClickHouse/ClickHouse/issues/25187). [#25290](https://github.com/ClickHouse/ClickHouse/pull/25290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect behaviour of `joinGetOrNull` with not-nullable columns. This fixes [#24261](https://github.com/ClickHouse/ClickHouse/issues/24261). [#25288](https://github.com/ClickHouse/ClickHouse/pull/25288) ([Amos Bird](https://github.com/amosbird)).
* Fix incorrect behaviour and a UBSan report in big integers. In previous versions `CAST(1e19 AS UInt128)` returned zero (illustrated below the list). [#25279](https://github.com/ClickHouse/ClickHouse/pull/25279) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed an error which occurred while inserting a subset of columns using the CSVWithNames format. Fixes [#25129](https://github.com/ClickHouse/ClickHouse/issues/25129). [#25169](https://github.com/ClickHouse/ClickHouse/pull/25169) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not use a table's projection for `SELECT` with `FINAL`. It is not supported yet. [#25163](https://github.com/ClickHouse/ClickHouse/pull/25163) ([Amos Bird](https://github.com/amosbird)).
* Fix possible loss of parts after updating up to 21.5 in case the table used `UUID` in the partition key (it is not recommended to use `UUID` in the partition key). Fixes [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25127](https://github.com/ClickHouse/ClickHouse/pull/25127) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash in queries with a cross join and `joined_subquery_requires_alias = 0`. Fixes [#24011](https://github.com/ClickHouse/ClickHouse/issues/24011). [#25082](https://github.com/ClickHouse/ClickHouse/pull/25082) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug with constant maps in the `mapContains` function that led to the error `empty column was returned by function mapContains`. Closes [#25077](https://github.com/ClickHouse/ClickHouse/issues/25077). [#25080](https://github.com/ClickHouse/ClickHouse/pull/25080) ([Kruglov Pavel](https://github.com/Avogar)).
* Remove the possibility of creating tables with columns referencing themselves, like `a UInt32 ALIAS a + 1` or `b UInt32 MATERIALIZED b` (illustrated below the list). Fixes [#24910](https://github.com/ClickHouse/ClickHouse/issues/24910), [#24292](https://github.com/ClickHouse/ClickHouse/issues/24292). [#25059](https://github.com/ClickHouse/ClickHouse/pull/25059) ([alesapin](https://github.com/alesapin)).
* Fix a wrong result when using an aggregate projection with a *non-empty* `GROUP BY` key to execute a query with an *empty* `GROUP BY` key. [#25055](https://github.com/ClickHouse/ClickHouse/pull/25055) ([Amos Bird](https://github.com/amosbird)).
* Fix serialization of split nested messages in the Protobuf format. This PR fixes [#24647](https://github.com/ClickHouse/ClickHouse/issues/24647). [#25000](https://github.com/ClickHouse/ClickHouse/pull/25000) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix limit/offset settings for distributed queries (ignore them on the remote nodes). [#24940](https://github.com/ClickHouse/ClickHouse/pull/24940) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible heap-buffer-overflow in the `Arrow` format. [#24922](https://github.com/ClickHouse/ClickHouse/pull/24922) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixed a possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3 (the S3 virtual filesystem is an experimental feature under development that should not be used in production). [#24885](https://github.com/ClickHouse/ClickHouse/pull/24885) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix a "Missing columns" exception when joining a Distributed Materialized View. [#24870](https://github.com/ClickHouse/ClickHouse/pull/24870) ([Azat Khuzhin](https://github.com/azat)).
* Allow `NULL` values in the PostgreSQL compatibility protocol. Closes [#22622](https://github.com/ClickHouse/ClickHouse/issues/22622). [#24857](https://github.com/ClickHouse/ClickHouse/pull/24857) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix a bug where the exception `Mutation was killed` could be thrown to the client on a mutation wait when the mutation was not loaded into memory yet. [#24809](https://github.com/ClickHouse/ClickHouse/pull/24809) ([alesapin](https://github.com/alesapin)).
* Fixed a bug in deserialization of random generator state which might cause some data types such as `AggregateFunction(groupArraySample(N), T)` to behave in a non-deterministic way. [#24538](https://github.com/ClickHouse/ClickHouse/pull/24538) ([tavplubix](https://github.com/tavplubix)).
* Disallow building uniqXXXXStates of other aggregation states. [#24523](https://github.com/ClickHouse/ClickHouse/pull/24523) ([Raúl Marín](https://github.com/Algunenano)). Then allow it back by actually eliminating the root cause of the related issue. ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the usage of tuples in `CREATE .. AS SELECT` queries. [#24464](https://github.com/ClickHouse/ClickHouse/pull/24464) ([Anton Popov](https://github.com/CurtizJ)).
* Fix the computation of total bytes in the `Buffer` table. In the current ClickHouse version the `total_writes.bytes` counter decreased too much during the buffer flush; this led to counter overflow, and `totalBytes` returned something around 17.44 EB some time after the flush. [#24450](https://github.com/ClickHouse/ClickHouse/pull/24450) ([DimasKovas](https://github.com/DimasKovas)).
* Fix incorrect information about the monotonicity of the `toWeek` function. This fixes [#24422](https://github.com/ClickHouse/ClickHouse/issues/24422). This bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/5212 and was exposed later by a smarter partition pruner. [#24446](https://github.com/ClickHouse/ClickHouse/pull/24446) ([Amos Bird](https://github.com/amosbird)).
* When user authentication is managed by LDAP: fixed a potential deadlock that could happen during LDAP role (re)mapping, when an LDAP group is mapped to a nonexistent local role. [#24431](https://github.com/ClickHouse/ClickHouse/pull/24431) ([Denis Glazachev](https://github.com/traceon)).
* In "multipart/form-data" messages, consider the CRLF preceding a boundary as part of it. Fixes [#23905](https://github.com/ClickHouse/ClickHouse/issues/23905). [#24399](https://github.com/ClickHouse/ClickHouse/pull/24399) ([Ivan](https://github.com/abyss7)).
* Fix dropping a partition with intersecting fake parts. In rare cases there might be parts with a mutation version greater than the current block number. [#24321](https://github.com/ClickHouse/ClickHouse/pull/24321) ([Amos Bird](https://github.com/amosbird)).
* Fixed a bug in moving a Materialized View from an Ordinary to an Atomic database (`RENAME TABLE` query). Now the inner table is moved to the new database together with the Materialized View. Fixes [#23926](https://github.com/ClickHouse/ClickHouse/issues/23926). [#24309](https://github.com/ClickHouse/ClickHouse/pull/24309) ([tavplubix](https://github.com/tavplubix)).
* Allow empty HTTP headers. Fixes [#23901](https://github.com/ClickHouse/ClickHouse/issues/23901). [#24285](https://github.com/ClickHouse/ClickHouse/pull/24285) ([Ivan](https://github.com/abyss7)).
* Correct processing of mutations (ALTER UPDATE/DELETE) in Memory tables. Closes [#24274](https://github.com/ClickHouse/ClickHouse/issues/24274). [#24275](https://github.com/ClickHouse/ClickHouse/pull/24275) ([flynn](https://github.com/ucasfl)).
* Make the column LowCardinality property in JOIN output the same as in the input. Closes [#23351](https://github.com/ClickHouse/ClickHouse/issues/23351), closes [#20315](https://github.com/ClickHouse/ClickHouse/issues/20315). [#24061](https://github.com/ClickHouse/ClickHouse/pull/24061) ([Vladimir](https://github.com/vdimir)).
* A fix for Kafka tables. Fix the bug in failover behavior when Engine = Kafka was not able to start consumption if the same consumer had an empty assignment previously. Closes [#21118](https://github.com/ClickHouse/ClickHouse/issues/21118). [#21267](https://github.com/ClickHouse/ClickHouse/pull/21267) ([filimonov](https://github.com/filimonov)).
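The big-integer fix above, illustrated:

```sql
-- In previous versions this returned 0; now it yields the expected value.
SELECT CAST(1e19 AS UInt128);  -- 10000000000000000000
```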
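The self-referencing-column rejection above, illustrated; the table name is hypothetical:

```sql
-- Both definitions are now rejected at CREATE time instead of producing
-- a table whose column can never be computed:
CREATE TABLE bad (x UInt32, a UInt32 ALIAS a + 1) ENGINE = Memory;       -- error
CREATE TABLE bad (x UInt32, b UInt32 MATERIALIZED b) ENGINE = Memory;    -- error
```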
#### Build/Testing/Packaging Improvement

* Add `darwin-aarch64` (Mac M1 / Apple Silicon) builds in CI [#25560](https://github.com/ClickHouse/ClickHouse/pull/25560) ([Ivan](https://github.com/abyss7)) and put the links to the docs and website ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Adds cross-platform embedding of binary resources into executables. It works on Illumos. [#25146](https://github.com/ClickHouse/ClickHouse/pull/25146) ([bnaecker](https://github.com/bnaecker)).
* Add join-related options to stress tests to improve fuzzing. [#25200](https://github.com/ClickHouse/ClickHouse/pull/25200) ([Vladimir](https://github.com/vdimir)).
* Enable build with the S3 module on macOS [#25217](https://github.com/ClickHouse/ClickHouse/issues/25217). [#25218](https://github.com/ClickHouse/ClickHouse/pull/25218) ([kevin wan](https://github.com/MaxWk)).
* Add integration test cases to cover the JDBC bridge. [#25047](https://github.com/ClickHouse/ClickHouse/pull/25047) ([Zhichun Wu](https://github.com/zhicwu)).
* Integration tests configuration has special treatment for dictionaries; removed the remaining manual dictionary setup. [#24728](https://github.com/ClickHouse/ClickHouse/pull/24728) ([Ilya Yatsishin](https://github.com/qoega)).
* Add libfuzzer tests for the YAMLParser class. [#24480](https://github.com/ClickHouse/ClickHouse/pull/24480) ([BoloniniD](https://github.com/BoloniniD)).
* Ubuntu 20.04 is now used to run integration tests, and the docker-compose version used to run them is updated to 1.28.2. Environment variables now take effect on docker-compose. Reworked test_dictionaries_all_layouts_separate_sources to allow parallel runs. [#20393](https://github.com/ClickHouse/ClickHouse/pull/20393) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix a TOCTOU error in the installation script. [#25277](https://github.com/ClickHouse/ClickHouse/pull/25277) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release 21.6, 2021-06-05

#### Upgrade Notes
CMakeLists.txt

@@ -184,10 +184,27 @@ endif ()
 set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -rdynamic")
 
 find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-12" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
+
+if (NOT OBJCOPY_PATH AND OS_DARWIN)
+    find_program (BREW_PATH NAMES "brew")
+    if (BREW_PATH)
+        execute_process (COMMAND ${BREW_PATH} --prefix llvm ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE LLVM_PREFIX)
+        if (LLVM_PREFIX)
+            find_program (OBJCOPY_PATH NAMES "llvm-objcopy" PATHS "${LLVM_PREFIX}/bin" NO_DEFAULT_PATH)
+        endif ()
+        if (NOT OBJCOPY_PATH)
+            execute_process (COMMAND ${BREW_PATH} --prefix binutils ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE BINUTILS_PREFIX)
+            if (BINUTILS_PREFIX)
+                find_program (OBJCOPY_PATH NAMES "objcopy" PATHS "${BINUTILS_PREFIX}/bin" NO_DEFAULT_PATH)
+            endif ()
+        endif ()
+    endif ()
+endif ()
+
 if (OBJCOPY_PATH)
-    message(STATUS "Using objcopy: ${OBJCOPY_PATH}.")
+    message (STATUS "Using objcopy: ${OBJCOPY_PATH}")
 else ()
-    message(FATAL_ERROR "Cannot find objcopy.")
+    message (FATAL_ERROR "Cannot find objcopy.")
 endif ()
 
 if (OS_DARWIN)
README.md

@@ -8,11 +8,8 @@ ClickHouse® is an open-source column-oriented database management system that a
 * [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster.
 * [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information.
 * [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
-* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
+* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-rxm3rdrk-lIUmhLC3V8WTaL0TGxsOmg) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
 * [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
 * [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation.
 * [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
 * You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.
-
-## Upcoming Events
-* [China ClickHouse Community Meetup (online)](http://hdxu.cn/rhbfZ) on 26 June 2021.
DateLUT.h

@@ -17,7 +17,7 @@ class DateLUT : private boost::noncopyable
 {
 public:
     /// Return singleton DateLUTImpl instance for the default time zone.
-    static ALWAYS_INLINE const DateLUTImpl & instance()
+    static ALWAYS_INLINE const DateLUTImpl & instance() // -V1071
     {
         const auto & date_lut = getInstance();
         return *date_lut.default_impl.load(std::memory_order_acquire);
DateLUTImpl.h

@@ -119,11 +119,16 @@ private:
     }
 
 public:
+    /// We use Int64 instead of time_t because time_t is mapped to the different types (long or long long)
+    /// on Linux and Darwin (on both of them, long and long long are 64 bit and behaves identically,
+    /// but they are different types in C++ and this affects function overload resolution).
+    using Time = Int64;
+
     /// The order of fields matters for alignment and sizeof.
     struct Values
     {
-        /// time_t at beginning of the day.
-        Int64 date;
+        /// Time at beginning of the day.
+        Time date;
 
         /// Properties of the day.
         UInt16 year;

@@ -182,20 +187,20 @@ private:
     LUTIndex years_months_lut[DATE_LUT_YEARS * 12];
 
     /// UTC offset at beginning of the Unix epoch. The same as unix timestamp of 1970-01-01 00:00:00 local time.
-    time_t offset_at_start_of_epoch;
+    Time offset_at_start_of_epoch;
     /// UTC offset at the beginning of the first supported year.
-    time_t offset_at_start_of_lut;
+    Time offset_at_start_of_lut;
     bool offset_is_whole_number_of_hours_during_epoch;
 
     /// Time zone name.
     std::string time_zone;
 
-    inline LUTIndex findIndex(time_t t) const
+    inline LUTIndex findIndex(Time t) const
     {
         /// First guess.
-        Int64 guess = (t / 86400) + daynum_offset_epoch;
+        Time guess = (t / 86400) + daynum_offset_epoch;
 
-        /// For negative time_t the integer division was rounded up, so the guess is offset by one.
+        /// For negative Time the integer division was rounded up, so the guess is offset by one.
         if (unlikely(t < 0))
             --guess;
 
@@ -227,7 +232,7 @@ private:
         return LUTIndex{static_cast<UInt32>(d + daynum_offset_epoch) & date_lut_mask};
     }
 
-    inline LUTIndex toLUTIndex(time_t t) const
+    inline LUTIndex toLUTIndex(Time t) const
     {
         return findIndex(t);
     }

@@ -280,7 +285,7 @@ public:
 
     /// Round down to start of monday.
     template <typename DateOrTime>
-    inline time_t toFirstDayOfWeek(DateOrTime v) const
+    inline Time toFirstDayOfWeek(DateOrTime v) const
     {
         const LUTIndex i = toLUTIndex(v);
         return lut[i - (lut[i].day_of_week - 1)].date;

@@ -295,7 +300,7 @@ public:
 
     /// Round down to start of month.
     template <typename DateOrTime>
-    inline time_t toFirstDayOfMonth(DateOrTime v) const
+    inline Time toFirstDayOfMonth(DateOrTime v) const
     {
         const LUTIndex i = toLUTIndex(v);
         return lut[i - (lut[i].day_of_month - 1)].date;

@@ -332,13 +337,13 @@ public:
     }
 
     template <typename DateOrTime>
-    inline time_t toFirstDayOfQuarter(DateOrTime v) const
+    inline Time toFirstDayOfQuarter(DateOrTime v) const
     {
         return toDate(toFirstDayOfQuarterIndex(v));
     }
 
     /// Round down to start of year.
-    inline time_t toFirstDayOfYear(time_t t) const
+    inline Time toFirstDayOfYear(Time t) const
     {
         return lut[years_lut[lut[findIndex(t)].year - DATE_LUT_MIN_YEAR]].date;
     }

@@ -355,14 +360,14 @@ public:
         return toDayNum(toFirstDayNumOfYearIndex(v));
     }
 
-    inline time_t toFirstDayOfNextMonth(time_t t) const
+    inline Time toFirstDayOfNextMonth(Time t) const
     {
         LUTIndex index = findIndex(t);
         index += 32 - lut[index].day_of_month;
         return lut[index - (lut[index].day_of_month - 1)].date;
     }
 
-    inline time_t toFirstDayOfPrevMonth(time_t t) const
+    inline Time toFirstDayOfPrevMonth(Time t) const
     {
         LUTIndex index = findIndex(t);
         index -= lut[index].day_of_month;
@@ -389,16 +394,16 @@ public:
 
     /** Round to start of day, then shift for specified amount of days.
      */
-    inline time_t toDateAndShift(time_t t, Int32 days) const
+    inline Time toDateAndShift(Time t, Int32 days) const
     {
         return lut[findIndex(t) + days].date;
     }
 
-    inline time_t toTime(time_t t) const
+    inline Time toTime(Time t) const
     {
         const LUTIndex index = findIndex(t);
 
-        time_t res = t - lut[index].date;
+        Time res = t - lut[index].date;
 
         if (res >= lut[index].time_at_offset_change())
             res += lut[index].amount_of_offset_change();

@@ -406,11 +411,11 @@ public:
         return res - offset_at_start_of_epoch; /// Starting at 1970-01-01 00:00:00 local time.
     }
 
-    inline unsigned toHour(time_t t) const
+    inline unsigned toHour(Time t) const
     {
         const LUTIndex index = findIndex(t);
 
-        time_t time = t - lut[index].date;
+        Time time = t - lut[index].date;
 
         if (time >= lut[index].time_at_offset_change())
             time += lut[index].amount_of_offset_change();

@@ -426,7 +431,7 @@ public:
      * then subtract the former from the latter to get the offset result.
      * The boundaries when meets DST(daylight saving time) change should be handled very carefully.
      */
-    inline time_t timezoneOffset(time_t t) const
+    inline Time timezoneOffset(Time t) const
     {
         const LUTIndex index = findIndex(t);
 

@@ -434,7 +439,7 @@ public:
         /// Because the "amount_of_offset_change" in LUT entry only exists in the change day, it's costly to scan it from the very begin.
         /// but we can figure out all the accumulated offsets from 1970-01-01 to that day just by get the whole difference between lut[].date,
         /// and then, we can directly subtract multiple 86400s to get the real DST offsets for the leap seconds is not considered now.
-        time_t res = (lut[index].date - lut[daynum_offset_epoch].date) % 86400;
+        Time res = (lut[index].date - lut[daynum_offset_epoch].date) % 86400;
 
         /// As so far to know, the maximal DST offset couldn't be more than 2 hours, so after the modulo operation the remainder
         /// will sits between [-offset --> 0 --> offset] which respectively corresponds to moving clock forward or backward.

@@ -448,7 +453,7 @@ public:
     }
 
 
-    inline unsigned toSecond(time_t t) const
+    inline unsigned toSecond(Time t) const
     {
         auto res = t % 60;
         if (likely(res >= 0))

@@ -456,7 +461,7 @@ public:
         return res + 60;
     }
 
-    inline unsigned toMinute(time_t t) const
+    inline unsigned toMinute(Time t) const
     {
         if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
             return (t / 60) % 60;
@@ -474,27 +479,27 @@ public:
     }
 
     /// NOTE: Assuming timezone offset is a multiple of 15 minutes.
-    inline time_t toStartOfMinute(time_t t) const { return roundDown(t, 60); }
-    inline time_t toStartOfFiveMinute(time_t t) const { return roundDown(t, 300); }
-    inline time_t toStartOfFifteenMinutes(time_t t) const { return roundDown(t, 900); }
+    inline Time toStartOfMinute(Time t) const { return roundDown(t, 60); }
+    inline Time toStartOfFiveMinute(Time t) const { return roundDown(t, 300); }
+    inline Time toStartOfFifteenMinutes(Time t) const { return roundDown(t, 900); }
 
-    inline time_t toStartOfTenMinutes(time_t t) const
+    inline Time toStartOfTenMinutes(Time t) const
     {
         if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
             return t / 600 * 600;
 
         /// More complex logic is for Nepal - it has offset 05:45. Australia/Eucla is also unfortunate.
-        Int64 date = find(t).date;
+        Time date = find(t).date;
         return date + (t - date) / 600 * 600;
     }
 
     /// NOTE: Assuming timezone transitions are multiple of hours. Lord Howe Island in Australia is a notable exception.
-    inline time_t toStartOfHour(time_t t) const
+    inline Time toStartOfHour(Time t) const
     {
         if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
             return t / 3600 * 3600;
 
-        Int64 date = find(t).date;
+        Time date = find(t).date;
         return date + (t - date) / 3600 * 3600;
     }
 

@@ -506,11 +511,11 @@ public:
      * because the same calendar day starts/ends at different timestamps in different time zones)
      */
 
-    inline time_t fromDayNum(DayNum d) const { return lut[toLUTIndex(d)].date; }
-    inline time_t fromDayNum(ExtendedDayNum d) const { return lut[toLUTIndex(d)].date; }
+    inline Time fromDayNum(DayNum d) const { return lut[toLUTIndex(d)].date; }
+    inline Time fromDayNum(ExtendedDayNum d) const { return lut[toLUTIndex(d)].date; }
 
     template <typename DateOrTime>
-    inline time_t toDate(DateOrTime v) const { return lut[toLUTIndex(v)].date; }
+    inline Time toDate(DateOrTime v) const { return lut[toLUTIndex(v)].date; }
 
     template <typename DateOrTime>
     inline unsigned toMonth(DateOrTime v) const { return lut[toLUTIndex(v)].month; }

@@ -578,7 +583,7 @@ public:
         return toDayNum(toFirstDayNumOfISOYearIndex(v));
     }
 
-    inline time_t toFirstDayOfISOYear(time_t t) const
+    inline Time toFirstDayOfISOYear(Time t) const
     {
         return lut[toFirstDayNumOfISOYearIndex(t)].date;
     }
@ -773,7 +778,7 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
/// We count all hour-length intervals, unrelated to offset changes.
|
/// We count all hour-length intervals, unrelated to offset changes.
|
||||||
inline time_t toRelativeHourNum(time_t t) const
|
inline Time toRelativeHourNum(Time t) const
|
||||||
{
|
{
|
||||||
if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
|
if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
|
||||||
return t / 3600;
|
return t / 3600;
|
||||||
@ -784,18 +789,18 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
template <typename DateOrTime>
|
template <typename DateOrTime>
|
||||||
inline time_t toRelativeHourNum(DateOrTime v) const
|
inline Time toRelativeHourNum(DateOrTime v) const
|
||||||
{
|
{
|
||||||
return toRelativeHourNum(lut[toLUTIndex(v)].date);
|
return toRelativeHourNum(lut[toLUTIndex(v)].date);
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t toRelativeMinuteNum(time_t t) const
|
inline Time toRelativeMinuteNum(Time t) const
|
||||||
{
|
{
|
||||||
return (t + DATE_LUT_ADD) / 60 - (DATE_LUT_ADD / 60);
|
return (t + DATE_LUT_ADD) / 60 - (DATE_LUT_ADD / 60);
|
||||||
}
|
}
|
||||||
|
|
||||||
template <typename DateOrTime>
|
template <typename DateOrTime>
|
||||||
inline time_t toRelativeMinuteNum(DateOrTime v) const
|
inline Time toRelativeMinuteNum(DateOrTime v) const
|
||||||
{
|
{
|
||||||
return toRelativeMinuteNum(lut[toLUTIndex(v)].date);
|
return toRelativeMinuteNum(lut[toLUTIndex(v)].date);
|
||||||
}
|
}
|
||||||
@ -842,14 +847,14 @@ public:
|
|||||||
return ExtendedDayNum(4 + (d - 4) / days * days);
|
return ExtendedDayNum(4 + (d - 4) / days * days);
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t toStartOfDayInterval(ExtendedDayNum d, UInt64 days) const
|
inline Time toStartOfDayInterval(ExtendedDayNum d, UInt64 days) const
|
||||||
{
|
{
|
||||||
if (days == 1)
|
if (days == 1)
|
||||||
return toDate(d);
|
return toDate(d);
|
||||||
return lut[toLUTIndex(ExtendedDayNum(d / days * days))].date;
|
return lut[toLUTIndex(ExtendedDayNum(d / days * days))].date;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t toStartOfHourInterval(time_t t, UInt64 hours) const
|
inline Time toStartOfHourInterval(Time t, UInt64 hours) const
|
||||||
{
|
{
|
||||||
if (hours == 1)
|
if (hours == 1)
|
||||||
return toStartOfHour(t);
|
return toStartOfHour(t);
|
||||||
@ -867,7 +872,7 @@ public:
|
|||||||
const LUTIndex index = findIndex(t);
|
const LUTIndex index = findIndex(t);
|
||||||
const Values & values = lut[index];
|
const Values & values = lut[index];
|
||||||
|
|
||||||
time_t time = t - values.date;
|
Time time = t - values.date;
|
||||||
if (time >= values.time_at_offset_change())
|
if (time >= values.time_at_offset_change())
|
||||||
{
|
{
|
||||||
/// Align to new hour numbers before rounding.
|
/// Align to new hour numbers before rounding.
|
||||||
@ -892,7 +897,7 @@ public:
|
|||||||
return values.date + time;
|
return values.date + time;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t toStartOfMinuteInterval(time_t t, UInt64 minutes) const
|
inline Time toStartOfMinuteInterval(Time t, UInt64 minutes) const
|
||||||
{
|
{
|
||||||
if (minutes == 1)
|
if (minutes == 1)
|
||||||
return toStartOfMinute(t);
|
return toStartOfMinute(t);
|
||||||
@ -909,7 +914,7 @@ public:
|
|||||||
return roundDown(t, seconds);
|
return roundDown(t, seconds);
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t toStartOfSecondInterval(time_t t, UInt64 seconds) const
|
inline Time toStartOfSecondInterval(Time t, UInt64 seconds) const
|
||||||
{
|
{
|
||||||
if (seconds == 1)
|
if (seconds == 1)
|
||||||
return t;
|
return t;
|
||||||
@ -934,14 +939,14 @@ public:
|
|||||||
return toDayNum(makeLUTIndex(year, month, day_of_month));
|
return toDayNum(makeLUTIndex(year, month, day_of_month));
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t makeDate(Int16 year, UInt8 month, UInt8 day_of_month) const
|
inline Time makeDate(Int16 year, UInt8 month, UInt8 day_of_month) const
|
||||||
{
|
{
|
||||||
return lut[makeLUTIndex(year, month, day_of_month)].date;
|
return lut[makeLUTIndex(year, month, day_of_month)].date;
|
||||||
}
|
}
|
||||||
|
|
||||||
/** Does not accept daylight saving time as argument: in case of ambiguity, it choose greater timestamp.
|
/** Does not accept daylight saving time as argument: in case of ambiguity, it choose greater timestamp.
|
||||||
*/
|
*/
|
||||||
inline time_t makeDateTime(Int16 year, UInt8 month, UInt8 day_of_month, UInt8 hour, UInt8 minute, UInt8 second) const
|
inline Time makeDateTime(Int16 year, UInt8 month, UInt8 day_of_month, UInt8 hour, UInt8 minute, UInt8 second) const
|
||||||
{
|
{
|
||||||
size_t index = makeLUTIndex(year, month, day_of_month);
|
size_t index = makeLUTIndex(year, month, day_of_month);
|
||||||
UInt32 time_offset = hour * 3600 + minute * 60 + second;
|
UInt32 time_offset = hour * 3600 + minute * 60 + second;
|
||||||
@ -969,7 +974,7 @@ public:
|
|||||||
return values.year * 10000 + values.month * 100 + values.day_of_month;
|
return values.year * 10000 + values.month * 100 + values.day_of_month;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t YYYYMMDDToDate(UInt32 num) const
|
inline Time YYYYMMDDToDate(UInt32 num) const
|
||||||
{
|
{
|
||||||
return makeDate(num / 10000, num / 100 % 100, num % 100);
|
return makeDate(num / 10000, num / 100 % 100, num % 100);
|
||||||
}
|
}
|
||||||
@ -1000,13 +1005,13 @@ public:
|
|||||||
TimeComponents time;
|
TimeComponents time;
|
||||||
};
|
};
|
||||||
|
|
||||||
inline DateComponents toDateComponents(time_t t) const
|
inline DateComponents toDateComponents(Time t) const
|
||||||
{
|
{
|
||||||
const Values & values = getValues(t);
|
const Values & values = getValues(t);
|
||||||
return { values.year, values.month, values.day_of_month };
|
return { values.year, values.month, values.day_of_month };
|
||||||
}
|
}
|
||||||
|
|
||||||
inline DateTimeComponents toDateTimeComponents(time_t t) const
|
inline DateTimeComponents toDateTimeComponents(Time t) const
|
||||||
{
|
{
|
||||||
const LUTIndex index = findIndex(t);
|
const LUTIndex index = findIndex(t);
|
||||||
const Values & values = lut[index];
|
const Values & values = lut[index];
|
||||||
@ -1017,7 +1022,7 @@ public:
|
|||||||
res.date.month = values.month;
|
res.date.month = values.month;
|
||||||
res.date.day = values.day_of_month;
|
res.date.day = values.day_of_month;
|
||||||
|
|
||||||
time_t time = t - values.date;
|
Time time = t - values.date;
|
||||||
if (time >= values.time_at_offset_change())
|
if (time >= values.time_at_offset_change())
|
||||||
time += values.amount_of_offset_change();
|
time += values.amount_of_offset_change();
|
||||||
|
|
||||||
@ -1042,7 +1047,7 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
inline UInt64 toNumYYYYMMDDhhmmss(time_t t) const
|
inline UInt64 toNumYYYYMMDDhhmmss(Time t) const
|
||||||
{
|
{
|
||||||
DateTimeComponents components = toDateTimeComponents(t);
|
DateTimeComponents components = toDateTimeComponents(t);
|
||||||
|
|
||||||
@ -1055,7 +1060,7 @@ public:
|
|||||||
+ UInt64(components.date.year) * 10000000000;
|
+ UInt64(components.date.year) * 10000000000;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t YYYYMMDDhhmmssToTime(UInt64 num) const
|
inline Time YYYYMMDDhhmmssToTime(UInt64 num) const
|
||||||
{
|
{
|
||||||
return makeDateTime(
|
return makeDateTime(
|
||||||
num / 10000000000,
|
num / 10000000000,
|
||||||
@ -1069,12 +1074,12 @@ public:
|
|||||||
/// Adding calendar intervals.
|
/// Adding calendar intervals.
|
||||||
/// Implementation specific behaviour when delta is too big.
|
/// Implementation specific behaviour when delta is too big.
|
||||||
|
|
||||||
inline NO_SANITIZE_UNDEFINED time_t addDays(time_t t, Int64 delta) const
|
inline NO_SANITIZE_UNDEFINED Time addDays(Time t, Int64 delta) const
|
||||||
{
|
{
|
||||||
const LUTIndex index = findIndex(t);
|
const LUTIndex index = findIndex(t);
|
||||||
const Values & values = lut[index];
|
const Values & values = lut[index];
|
||||||
|
|
||||||
time_t time = t - values.date;
|
Time time = t - values.date;
|
||||||
if (time >= values.time_at_offset_change())
|
if (time >= values.time_at_offset_change())
|
||||||
time += values.amount_of_offset_change();
|
time += values.amount_of_offset_change();
|
||||||
|
|
||||||
@ -1086,7 +1091,7 @@ public:
|
|||||||
return lut[new_index].date + time;
|
return lut[new_index].date + time;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline NO_SANITIZE_UNDEFINED time_t addWeeks(time_t t, Int64 delta) const
|
inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int64 delta) const
|
||||||
{
|
{
|
||||||
return addDays(t, delta * 7);
|
return addDays(t, delta * 7);
|
||||||
}
|
}
|
||||||
@ -1131,14 +1136,14 @@ public:
|
|||||||
|
|
||||||
/// If resulting month has less deys than source month, then saturation can happen.
|
/// If resulting month has less deys than source month, then saturation can happen.
|
||||||
/// Example: 31 Aug + 1 month = 30 Sep.
|
/// Example: 31 Aug + 1 month = 30 Sep.
|
||||||
inline time_t NO_SANITIZE_UNDEFINED addMonths(time_t t, Int64 delta) const
|
inline Time NO_SANITIZE_UNDEFINED addMonths(Time t, Int64 delta) const
|
||||||
{
|
{
|
||||||
const auto result_day = addMonthsIndex(t, delta);
|
const auto result_day = addMonthsIndex(t, delta);
|
||||||
|
|
||||||
const LUTIndex index = findIndex(t);
|
const LUTIndex index = findIndex(t);
|
||||||
const Values & values = lut[index];
|
const Values & values = lut[index];
|
||||||
|
|
||||||
time_t time = t - values.date;
|
Time time = t - values.date;
|
||||||
if (time >= values.time_at_offset_change())
|
if (time >= values.time_at_offset_change())
|
||||||
time += values.amount_of_offset_change();
|
time += values.amount_of_offset_change();
|
||||||
|
|
||||||
@ -1153,7 +1158,7 @@ public:
|
|||||||
return toDayNum(addMonthsIndex(d, delta));
|
return toDayNum(addMonthsIndex(d, delta));
|
||||||
}
|
}
|
||||||
|
|
||||||
inline time_t NO_SANITIZE_UNDEFINED addQuarters(time_t t, Int64 delta) const
|
inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int64 delta) const
|
||||||
{
|
{
|
||||||
return addMonths(t, delta * 3);
|
return addMonths(t, delta * 3);
|
||||||
}
|
}
|
||||||
@ -1180,14 +1185,14 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
/// Saturation can occur if 29 Feb is mapped to non-leap year.
|
/// Saturation can occur if 29 Feb is mapped to non-leap year.
|
||||||
inline time_t addYears(time_t t, Int64 delta) const
|
inline Time addYears(Time t, Int64 delta) const
|
||||||
{
|
{
|
||||||
auto result_day = addYearsIndex(t, delta);
|
auto result_day = addYearsIndex(t, delta);
|
||||||
|
|
||||||
const LUTIndex index = findIndex(t);
|
const LUTIndex index = findIndex(t);
|
||||||
const Values & values = lut[index];
|
const Values & values = lut[index];
|
||||||
|
|
||||||
time_t time = t - values.date;
|
Time time = t - values.date;
|
||||||
if (time >= values.time_at_offset_change())
|
if (time >= values.time_at_offset_change())
|
||||||
time += values.amount_of_offset_change();
|
time += values.amount_of_offset_change();
|
||||||
|
|
||||||
@ -1203,7 +1208,7 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
inline std::string timeToString(time_t t) const
|
inline std::string timeToString(Time t) const
|
||||||
{
|
{
|
||||||
DateTimeComponents components = toDateTimeComponents(t);
|
DateTimeComponents components = toDateTimeComponents(t);
|
||||||
|
|
||||||
@ -1228,7 +1233,7 @@ public:
|
|||||||
return s;
|
return s;
|
||||||
}
|
}
|
||||||
|
|
||||||
inline std::string dateToString(time_t t) const
|
inline std::string dateToString(Time t) const
|
||||||
{
|
{
|
||||||
const Values & values = getValues(t);
|
const Values & values = getValues(t);
|
||||||
|
|
||||||
|
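The widening from time_t to the signed 64-bit Time alias above matters most for timestamps outside the classic 1970-2038 window. A minimal sketch of the negative-rounding concern this diff keeps handling (assuming Time is Int64, as the alias earlier in this header defines; roundDownSketch is a hypothetical stand-in for the real roundDown, which additionally uses the DATE_LUT_ADD offset seen in toRelativeMinuteNum):

#include <cstdint>

using Time = int64_t;

/// Round t down to the start of a step-second interval, also for t < 0,
/// where plain integer division would truncate toward zero instead of
/// toward minus infinity.
inline Time roundDownSketch(Time t, Time step)
{
    Time rounded = t / step * step;
    if (rounded > t) /// t was negative and not on a step boundary
        rounded -= step;
    return rounded;
}

/// roundDownSketch(-61, 60) == -120: the minute containing -61 starts at -120.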
41  base/common/FunctorToStaticMethodAdaptor.h  Normal file
@@ -0,0 +1,41 @@
+#include <functional>
+
+/** Adapt functor to static method where functor passed as context.
+  * Main use case to convert lambda into function that can be passed into JIT code.
+  */
+template <typename Functor>
+class FunctorToStaticMethodAdaptor : public FunctorToStaticMethodAdaptor<decltype(&Functor::operator())>
+{
+};
+
+template <typename R, typename C, typename ...Args>
+class FunctorToStaticMethodAdaptor<R (C::*)(Args...) const>
+{
+public:
+    static R call(C * ptr, Args &&... arguments)
+    {
+        return std::invoke(&C::operator(), ptr, std::forward<Args>(arguments)...);
+    }
+
+    static R unsafeCall(char * ptr, Args &&... arguments)
+    {
+        C * ptr_typed = reinterpret_cast<C*>(ptr);
+        return std::invoke(&C::operator(), ptr_typed, std::forward<Args>(arguments)...);
+    }
+};
+
+template <typename R, typename C, typename ...Args>
+class FunctorToStaticMethodAdaptor<R (C::*)(Args...)>
+{
+public:
+    static R call(C * ptr, Args &&... arguments)
+    {
+        return std::invoke(&C::operator(), ptr, std::forward<Args>(arguments)...);
+    }
+
+    static R unsafeCall(char * ptr, Args &&... arguments)
+    {
+        C * ptr_typed = static_cast<C*>(ptr);
+        return std::invoke(&C::operator(), ptr_typed, std::forward<Args>(arguments)...);
+    }
+};
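A brief usage sketch for the adaptor above (hypothetical, assuming the new header is on the include path): the lambda travels through the kind of opaque char * context pointer that JIT-compiled code would receive, and unsafeCall restores the type.

#include <common/FunctorToStaticMethodAdaptor.h>
#include <cassert>

int main()
{
    auto add = [](int x, int y) { return x + y; };
    using Adaptor = FunctorToStaticMethodAdaptor<decltype(add)>;

    /// Pass the lambda as an untyped context pointer, then call through the static method.
    char * context = reinterpret_cast<char *>(&add);
    assert(Adaptor::unsafeCall(context, 1, 2) == 3);
    return 0;
}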
@@ -1,8 +1,9 @@
 #include <common/ReplxxLineReader.h>
 #include <common/errnoToString.h>

-#include <errno.h>
-#include <string.h>
+#include <chrono>
+#include <cerrno>
+#include <cstring>
 #include <unistd.h>
 #include <functional>
 #include <sys/file.h>
@@ -24,6 +25,94 @@ void trim(String & s)
     s.erase(std::find_if(s.rbegin(), s.rend(), [](int ch) { return !std::isspace(ch); }).base(), s.end());
 }

+/// Copied from replxx::src/util.cxx::now_ms_str() under the terms of 3-clause BSD license of Replxx.
+/// Copyright (c) 2017-2018, Marcin Konarski (amok at codestation.org)
+/// Copyright (c) 2010, Salvatore Sanfilippo (antirez at gmail dot com)
+/// Copyright (c) 2010, Pieter Noordhuis (pcnoordhuis at gmail dot com)
+std::string replxx_now_ms_str()
+{
+    std::chrono::milliseconds ms(std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()));
+    time_t t = ms.count() / 1000;
+    tm broken;
+    if (!localtime_r(&t, &broken))
+    {
+        return std::string();
+    }
+
+    static int const BUFF_SIZE(32);
+    char str[BUFF_SIZE];
+    strftime(str, BUFF_SIZE, "%Y-%m-%d %H:%M:%S.", &broken);
+    snprintf(str + sizeof("YYYY-mm-dd HH:MM:SS"), 5, "%03d", static_cast<int>(ms.count() % 1000));
+    return str;
+}
+
+/// Convert from readline to replxx format.
+///
+/// replxx requires each history line to prepended with time line:
+///
+///     ### YYYY-MM-DD HH:MM:SS.SSS
+///     select 1
+///
+/// And w/o those service lines it will load all lines from history file as
+/// one history line for suggestion. And if there are lots of lines in file it
+/// will take lots of time (getline() + tons of reallocations).
+///
+/// NOTE: this code uses std::ifstream/std::ofstream like original replxx code.
+void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
+{
+    std::ifstream in(path);
+    if (!in)
+    {
+        rx.print("Cannot open %s reading (for conversion): %s\n",
+            path.c_str(), errnoToString(errno).c_str());
+        return;
+    }
+
+    std::string line;
+    if (!getline(in, line).good())
+    {
+        rx.print("Cannot read from %s (for conversion): %s\n",
+            path.c_str(), errnoToString(errno).c_str());
+        return;
+    }
+
+    /// This is the marker of the date, no need to convert.
+    static char const REPLXX_TIMESTAMP_PATTERN[] = "### dddd-dd-dd dd:dd:dd.ddd";
+    if (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN))
+    {
+        return;
+    }
+
+    std::vector<std::string> lines;
+    in.seekg(0);
+    while (getline(in, line).good())
+    {
+        lines.push_back(line);
+    }
+    in.close();
+
+    size_t lines_size = lines.size();
+    std::sort(lines.begin(), lines.end());
+    lines.erase(std::unique(lines.begin(), lines.end()), lines.end());
+    rx.print("The history file (%s) is in old format. %zu lines, %zu unique lines.\n",
+        path.c_str(), lines_size, lines.size());
+
+    std::ofstream out(path);
+    if (!out)
+    {
+        rx.print("Cannot open %s for writing (for conversion): %s\n",
+            path.c_str(), errnoToString(errno).c_str());
+        return;
+    }
+
+    const std::string & timestamp = replxx_now_ms_str();
+    for (const auto & out_line : lines)
+    {
+        out << "### " << timestamp << "\n" << out_line << std::endl;
+    }
+    out.close();
+}
+
 }

 ReplxxLineReader::ReplxxLineReader(
@@ -47,6 +136,8 @@ ReplxxLineReader::ReplxxLineReader(
     }
     else
     {
+        convertHistoryFile(history_file_path, rx);
+
         if (flock(history_file_fd, LOCK_SH))
        {
             rx.print("Shared lock of history file failed: %s\n", errnoToString(errno).c_str());
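For illustration, after conversion every line of the old history file is prefixed with the "### YYYY-MM-DD HH:MM:SS.SSS" marker described in the comment above; note that convertHistoryFile stamps all migrated entries with the same current timestamp, so the result looks roughly like this (timestamp values hypothetical):

### 2021-07-09 12:00:00.000
SELECT 1
### 2021-07-09 12:00:00.000
SELECT * FROM system.numbers LIMIT 10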
@@ -1,9 +1,12 @@
-# This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54452)
+# This variables autochanged by release_lib.sh:
+
+# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
+# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
+SET(VERSION_REVISION 54454)
 SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 7)
+SET(VERSION_MINOR 9)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 976ccc2e908ac3bc28f763bfea8134ea0a121b40)
-SET(VERSION_DESCRIBE v21.7.1.1-prestable)
-SET(VERSION_STRING 21.7.1.1)
+SET(VERSION_GITHASH f48c5af90c2ad51955d1ee3b6b05d006b03e4238)
+SET(VERSION_DESCRIBE v21.9.1.1-prestable)
+SET(VERSION_STRING 21.9.1.1)
 # end of autochange
@@ -33,44 +33,26 @@ macro(clickhouse_embed_binaries)
         message(FATAL_ERROR "The list of binary resources to embed may not be empty")
     endif()

-    # If cross-compiling, ensure we use the toolchain file and target the
-    # actual target architecture
-    if (CMAKE_CROSSCOMPILING)
-        set(CROSS_COMPILE_FLAGS "--target=${CMAKE_C_COMPILER_TARGET} --gcc-toolchain=${TOOLCHAIN_FILE}")
-    else()
-        set(CROSS_COMPILE_FLAGS "")
-    endif()
+    add_library("${EMBED_TARGET}" STATIC)
+    set_target_properties("${EMBED_TARGET}" PROPERTIES LINKER_LANGUAGE C)

     set(EMBED_TEMPLATE_FILE "${PROJECT_SOURCE_DIR}/programs/embed_binary.S.in")
-    set(RESOURCE_OBJS)
-    foreach(RESOURCE_FILE ${EMBED_RESOURCES})
-        set(RESOURCE_OBJ "${RESOURCE_FILE}.o")
-        list(APPEND RESOURCE_OBJS "${RESOURCE_OBJ}")

-        # Normalize the name of the resource
+    foreach(RESOURCE_FILE ${EMBED_RESOURCES})
+        set(ASSEMBLY_FILE_NAME "${RESOURCE_FILE}.S")
         set(BINARY_FILE_NAME "${RESOURCE_FILE}")

+        # Normalize the name of the resource.
         string(REGEX REPLACE "[\./-]" "_" SYMBOL_NAME "${RESOURCE_FILE}") # - must be last in regex
         string(REPLACE "+" "_PLUS_" SYMBOL_NAME "${SYMBOL_NAME}")
-        set(ASSEMBLY_FILE_NAME "${RESOURCE_FILE}.S")

-        # Put the configured assembly file in the output directory.
-        # This is so we can clean it up as usual, and we CD to the
-        # source directory before compiling, so that the assembly
-        # `.incbin` directive can find the file.
+        # Generate the configured assembly file in the output directory.
         configure_file("${EMBED_TEMPLATE_FILE}" "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}" @ONLY)

-        # Generate the output object file by compiling the assembly, in the directory of
-        # the sources so that the resource file may also be found
-        add_custom_command(
-            OUTPUT ${RESOURCE_OBJ}
-            COMMAND cd "${EMBED_RESOURCE_DIR}" &&
-                    ${CMAKE_C_COMPILER} "${CROSS_COMPILE_FLAGS}" -c -o
-                    "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}"
-                    "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}"
-        )
-        set_source_files_properties("${RESOURCE_OBJ}" PROPERTIES EXTERNAL_OBJECT true GENERATED true)
-    endforeach()
+        # Set the include directory for relative paths specified for `.incbin` directive.
+        set_property(SOURCE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}" APPEND PROPERTY INCLUDE_DIRECTORIES "${EMBED_RESOURCE_DIR}")

-    add_library("${EMBED_TARGET}" STATIC ${RESOURCE_OBJS})
-    set_target_properties("${EMBED_TARGET}" PROPERTIES LINKER_LANGUAGE C)
+        target_sources("${EMBED_TARGET}" PRIVATE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}")
+        set_target_properties("${EMBED_TARGET}" PROPERTIES OBJECT_DEPENDS "${RESOURCE_FILE}")
+    endforeach()
 endmacro()
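For context, a hedged sketch (not part of this commit) of how C++ code typically consumes a blob embedded through an assembly .incbin template like embed_binary.S.in: the assembler exports start and end symbols, and the program reads the bytes between them. The symbol names below are hypothetical; the real ones are generated from SYMBOL_NAME in the macro above.

#include <cstddef>
#include <string_view>

/// Symbols emitted by the generated assembly for a resource named "config.xml" (hypothetical naming).
extern "C" const char _binary_config_xml_start[];
extern "C" const char _binary_config_xml_end[];

std::string_view getEmbeddedConfig()
{
    return {_binary_config_xml_start,
            static_cast<std::size_t>(_binary_config_xml_end - _binary_config_xml_start)};
}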
@@ -4,7 +4,6 @@ set (CMAKE_C_COMPILER_TARGET "aarch64-linux-gnu")
 set (CMAKE_CXX_COMPILER_TARGET "aarch64-linux-gnu")
 set (CMAKE_ASM_COMPILER_TARGET "aarch64-linux-gnu")
 set (CMAKE_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../toolchain/linux-aarch64/aarch64-linux-gnu/libc")
-get_filename_component (TOOLCHAIN_FILE "${CMAKE_TOOLCHAIN_FILE}" REALPATH)

 # We don't use compiler from toolchain because it's gcc-8, and we provide support only for gcc-9.
 set (CMAKE_AR "${CMAKE_CURRENT_LIST_DIR}/../toolchain/linux-aarch64/bin/aarch64-linux-gnu-ar" CACHE FILEPATH "" FORCE)
1  contrib/CMakeLists.txt  vendored
@@ -34,7 +34,6 @@ endif()
 set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)

 add_subdirectory (abseil-cpp-cmake)
-add_subdirectory (antlr4-runtime-cmake)
 add_subdirectory (boost-cmake)
 add_subdirectory (cctz-cmake)
 add_subdirectory (consistent-hashing)
1  contrib/antlr4-runtime  vendored
@@ -1 +0,0 @@
-Subproject commit 672643e9a427ef803abf13bc8cb4989606553d64
@@ -1,156 +0,0 @@
-set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/antlr4-runtime")
-
-set (SRCS
-    "${LIBRARY_DIR}/ANTLRErrorListener.cpp"
-    "${LIBRARY_DIR}/ANTLRErrorStrategy.cpp"
-    "${LIBRARY_DIR}/ANTLRFileStream.cpp"
-    "${LIBRARY_DIR}/ANTLRInputStream.cpp"
-    "${LIBRARY_DIR}/atn/AbstractPredicateTransition.cpp"
-    "${LIBRARY_DIR}/atn/ActionTransition.cpp"
-    "${LIBRARY_DIR}/atn/AmbiguityInfo.cpp"
-    "${LIBRARY_DIR}/atn/ArrayPredictionContext.cpp"
-    "${LIBRARY_DIR}/atn/ATN.cpp"
-    "${LIBRARY_DIR}/atn/ATNConfig.cpp"
-    "${LIBRARY_DIR}/atn/ATNConfigSet.cpp"
-    "${LIBRARY_DIR}/atn/ATNDeserializationOptions.cpp"
-    "${LIBRARY_DIR}/atn/ATNDeserializer.cpp"
-    "${LIBRARY_DIR}/atn/ATNSerializer.cpp"
-    "${LIBRARY_DIR}/atn/ATNSimulator.cpp"
-    "${LIBRARY_DIR}/atn/ATNState.cpp"
-    "${LIBRARY_DIR}/atn/AtomTransition.cpp"
-    "${LIBRARY_DIR}/atn/BasicBlockStartState.cpp"
-    "${LIBRARY_DIR}/atn/BasicState.cpp"
-    "${LIBRARY_DIR}/atn/BlockEndState.cpp"
-    "${LIBRARY_DIR}/atn/BlockStartState.cpp"
-    "${LIBRARY_DIR}/atn/ContextSensitivityInfo.cpp"
-    "${LIBRARY_DIR}/atn/DecisionEventInfo.cpp"
-    "${LIBRARY_DIR}/atn/DecisionInfo.cpp"
-    "${LIBRARY_DIR}/atn/DecisionState.cpp"
-    "${LIBRARY_DIR}/atn/EmptyPredictionContext.cpp"
-    "${LIBRARY_DIR}/atn/EpsilonTransition.cpp"
-    "${LIBRARY_DIR}/atn/ErrorInfo.cpp"
-    "${LIBRARY_DIR}/atn/LexerAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerActionExecutor.cpp"
-    "${LIBRARY_DIR}/atn/LexerATNConfig.cpp"
-    "${LIBRARY_DIR}/atn/LexerATNSimulator.cpp"
-    "${LIBRARY_DIR}/atn/LexerChannelAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerCustomAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerIndexedCustomAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerModeAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerMoreAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerPopModeAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerPushModeAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerSkipAction.cpp"
-    "${LIBRARY_DIR}/atn/LexerTypeAction.cpp"
-    "${LIBRARY_DIR}/atn/LL1Analyzer.cpp"
-    "${LIBRARY_DIR}/atn/LookaheadEventInfo.cpp"
-    "${LIBRARY_DIR}/atn/LoopEndState.cpp"
-    "${LIBRARY_DIR}/atn/NotSetTransition.cpp"
-    "${LIBRARY_DIR}/atn/OrderedATNConfigSet.cpp"
-    "${LIBRARY_DIR}/atn/ParseInfo.cpp"
-    "${LIBRARY_DIR}/atn/ParserATNSimulator.cpp"
-    "${LIBRARY_DIR}/atn/PlusBlockStartState.cpp"
-    "${LIBRARY_DIR}/atn/PlusLoopbackState.cpp"
-    "${LIBRARY_DIR}/atn/PrecedencePredicateTransition.cpp"
-    "${LIBRARY_DIR}/atn/PredicateEvalInfo.cpp"
-    "${LIBRARY_DIR}/atn/PredicateTransition.cpp"
-    "${LIBRARY_DIR}/atn/PredictionContext.cpp"
-    "${LIBRARY_DIR}/atn/PredictionMode.cpp"
-    "${LIBRARY_DIR}/atn/ProfilingATNSimulator.cpp"
-    "${LIBRARY_DIR}/atn/RangeTransition.cpp"
-    "${LIBRARY_DIR}/atn/RuleStartState.cpp"
-    "${LIBRARY_DIR}/atn/RuleStopState.cpp"
-    "${LIBRARY_DIR}/atn/RuleTransition.cpp"
-    "${LIBRARY_DIR}/atn/SemanticContext.cpp"
-    "${LIBRARY_DIR}/atn/SetTransition.cpp"
-    "${LIBRARY_DIR}/atn/SingletonPredictionContext.cpp"
-    "${LIBRARY_DIR}/atn/StarBlockStartState.cpp"
-    "${LIBRARY_DIR}/atn/StarLoopbackState.cpp"
-    "${LIBRARY_DIR}/atn/StarLoopEntryState.cpp"
-    "${LIBRARY_DIR}/atn/TokensStartState.cpp"
-    "${LIBRARY_DIR}/atn/Transition.cpp"
-    "${LIBRARY_DIR}/atn/WildcardTransition.cpp"
-    "${LIBRARY_DIR}/BailErrorStrategy.cpp"
-    "${LIBRARY_DIR}/BaseErrorListener.cpp"
-    "${LIBRARY_DIR}/BufferedTokenStream.cpp"
-    "${LIBRARY_DIR}/CharStream.cpp"
-    "${LIBRARY_DIR}/CommonToken.cpp"
-    "${LIBRARY_DIR}/CommonTokenFactory.cpp"
-    "${LIBRARY_DIR}/CommonTokenStream.cpp"
-    "${LIBRARY_DIR}/ConsoleErrorListener.cpp"
-    "${LIBRARY_DIR}/DefaultErrorStrategy.cpp"
-    "${LIBRARY_DIR}/dfa/DFA.cpp"
-    "${LIBRARY_DIR}/dfa/DFASerializer.cpp"
-    "${LIBRARY_DIR}/dfa/DFAState.cpp"
-    "${LIBRARY_DIR}/dfa/LexerDFASerializer.cpp"
-    "${LIBRARY_DIR}/DiagnosticErrorListener.cpp"
-    "${LIBRARY_DIR}/Exceptions.cpp"
-    "${LIBRARY_DIR}/FailedPredicateException.cpp"
-    "${LIBRARY_DIR}/InputMismatchException.cpp"
-    "${LIBRARY_DIR}/InterpreterRuleContext.cpp"
-    "${LIBRARY_DIR}/IntStream.cpp"
-    "${LIBRARY_DIR}/Lexer.cpp"
-    "${LIBRARY_DIR}/LexerInterpreter.cpp"
-    "${LIBRARY_DIR}/LexerNoViableAltException.cpp"
-    "${LIBRARY_DIR}/ListTokenSource.cpp"
-    "${LIBRARY_DIR}/misc/InterpreterDataReader.cpp"
-    "${LIBRARY_DIR}/misc/Interval.cpp"
-    "${LIBRARY_DIR}/misc/IntervalSet.cpp"
-    "${LIBRARY_DIR}/misc/MurmurHash.cpp"
-    "${LIBRARY_DIR}/misc/Predicate.cpp"
-    "${LIBRARY_DIR}/NoViableAltException.cpp"
-    "${LIBRARY_DIR}/Parser.cpp"
-    "${LIBRARY_DIR}/ParserInterpreter.cpp"
-    "${LIBRARY_DIR}/ParserRuleContext.cpp"
-    "${LIBRARY_DIR}/ProxyErrorListener.cpp"
-    "${LIBRARY_DIR}/RecognitionException.cpp"
-    "${LIBRARY_DIR}/Recognizer.cpp"
-    "${LIBRARY_DIR}/RuleContext.cpp"
-    "${LIBRARY_DIR}/RuleContextWithAltNum.cpp"
-    "${LIBRARY_DIR}/RuntimeMetaData.cpp"
-    "${LIBRARY_DIR}/support/Any.cpp"
-    "${LIBRARY_DIR}/support/Arrays.cpp"
-    "${LIBRARY_DIR}/support/CPPUtils.cpp"
-    "${LIBRARY_DIR}/support/guid.cpp"
-    "${LIBRARY_DIR}/support/StringUtils.cpp"
-    "${LIBRARY_DIR}/Token.cpp"
-    "${LIBRARY_DIR}/TokenSource.cpp"
-    "${LIBRARY_DIR}/TokenStream.cpp"
-    "${LIBRARY_DIR}/TokenStreamRewriter.cpp"
-    "${LIBRARY_DIR}/tree/ErrorNode.cpp"
-    "${LIBRARY_DIR}/tree/ErrorNodeImpl.cpp"
-    "${LIBRARY_DIR}/tree/IterativeParseTreeWalker.cpp"
-    "${LIBRARY_DIR}/tree/ParseTree.cpp"
-    "${LIBRARY_DIR}/tree/ParseTreeListener.cpp"
-    "${LIBRARY_DIR}/tree/ParseTreeVisitor.cpp"
-    "${LIBRARY_DIR}/tree/ParseTreeWalker.cpp"
-    "${LIBRARY_DIR}/tree/pattern/Chunk.cpp"
-    "${LIBRARY_DIR}/tree/pattern/ParseTreeMatch.cpp"
-    "${LIBRARY_DIR}/tree/pattern/ParseTreePattern.cpp"
-    "${LIBRARY_DIR}/tree/pattern/ParseTreePatternMatcher.cpp"
-    "${LIBRARY_DIR}/tree/pattern/RuleTagToken.cpp"
-    "${LIBRARY_DIR}/tree/pattern/TagChunk.cpp"
-    "${LIBRARY_DIR}/tree/pattern/TextChunk.cpp"
-    "${LIBRARY_DIR}/tree/pattern/TokenTagToken.cpp"
-    "${LIBRARY_DIR}/tree/TerminalNode.cpp"
-    "${LIBRARY_DIR}/tree/TerminalNodeImpl.cpp"
-    "${LIBRARY_DIR}/tree/Trees.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPath.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathLexer.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathLexerErrorListener.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathRuleAnywhereElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathRuleElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathTokenAnywhereElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathTokenElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathWildcardAnywhereElement.cpp"
-    "${LIBRARY_DIR}/tree/xpath/XPathWildcardElement.cpp"
-    "${LIBRARY_DIR}/UnbufferedCharStream.cpp"
-    "${LIBRARY_DIR}/UnbufferedTokenStream.cpp"
-    "${LIBRARY_DIR}/Vocabulary.cpp"
-    "${LIBRARY_DIR}/WritableToken.cpp"
-)
-
-add_library (antlr4-runtime ${SRCS})
-
-target_include_directories (antlr4-runtime SYSTEM PUBLIC ${LIBRARY_DIR})
@@ -26,7 +26,7 @@ if (NOT USE_INTERNAL_CCTZ_LIBRARY)
     set_property (TARGET cctz PROPERTY IMPORTED_LOCATION ${LIBRARY_CCTZ})
     set_property (TARGET cctz PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${INCLUDE_CCTZ})
 endif()

 set(SYSTEM_STORAGE_TZ_FILE "${CMAKE_BINARY_DIR}/src/Storages/System/StorageSystemTimeZones.generated.cpp")
 file(REMOVE ${SYSTEM_STORAGE_TZ_FILE})
 file(APPEND ${SYSTEM_STORAGE_TZ_FILE} "// autogenerated by ClickHouse/contrib/cctz-cmake/CMakeLists.txt\n")
2  contrib/h3  vendored
@@ -1 +1 @@
-Subproject commit e209086ae1b5477307f545a0f6111780edc59940
+Subproject commit c7f46cfd71fb60e2fefc90e28abe81657deff735
@@ -3,21 +3,22 @@ set(H3_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib")

 set(SRCS
     "${H3_SOURCE_DIR}/lib/algos.c"
-    "${H3_SOURCE_DIR}/lib/baseCells.c"
-    "${H3_SOURCE_DIR}/lib/bbox.c"
     "${H3_SOURCE_DIR}/lib/coordijk.c"
-    "${H3_SOURCE_DIR}/lib/faceijk.c"
-    "${H3_SOURCE_DIR}/lib/geoCoord.c"
-    "${H3_SOURCE_DIR}/lib/h3Index.c"
-    "${H3_SOURCE_DIR}/lib/h3UniEdge.c"
-    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
-    "${H3_SOURCE_DIR}/lib/localij.c"
-    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+    "${H3_SOURCE_DIR}/lib/bbox.c"
     "${H3_SOURCE_DIR}/lib/polygon.c"
+    "${H3_SOURCE_DIR}/lib/h3Index.c"
     "${H3_SOURCE_DIR}/lib/vec2d.c"
     "${H3_SOURCE_DIR}/lib/vec3d.c"
     "${H3_SOURCE_DIR}/lib/vertex.c"
+    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
+    "${H3_SOURCE_DIR}/lib/localij.c"
+    "${H3_SOURCE_DIR}/lib/latLng.c"
+    "${H3_SOURCE_DIR}/lib/directedEdge.c"
+    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+    "${H3_SOURCE_DIR}/lib/iterators.c"
     "${H3_SOURCE_DIR}/lib/vertexGraph.c"
+    "${H3_SOURCE_DIR}/lib/faceijk.c"
+    "${H3_SOURCE_DIR}/lib/baseCells.c"
 )

 configure_file("${H3_SOURCE_DIR}/include/h3api.h.in" "${H3_BINARY_DIR}/include/h3api.h")
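The reshuffled source list reflects the H3 4.x renames visible above (geoCoord.c becoming latLng.c, h3UniEdge.c becoming directedEdge.c). A hedged sketch of the corresponding call-site change, assuming the upstream v4 API:

#include <h3api.h>

/// v3 spelled this GeoCoord + geoToH3(); v4 uses LatLng + latLngToCell().
H3Index cellForPoint(double lat_deg, double lng_deg, int resolution)
{
    LatLng location;
    location.lat = degsToRads(lat_deg);
    location.lng = degsToRads(lng_deg);

    H3Index cell = 0;
    latLngToCell(&location, resolution, &cell);
    return cell;
}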
2  contrib/libpq  vendored
@@ -1 +1 @@
-Subproject commit c7624588ddd84f153dd5990e81b886e4568bddde
+Subproject commit e071ea570f8985aa00e34f5b9d50a3cfe666327e
@@ -8,7 +8,7 @@ set(SRCS
     "${LIBPQ_SOURCE_DIR}/fe-lobj.c"
     "${LIBPQ_SOURCE_DIR}/fe-misc.c"
     "${LIBPQ_SOURCE_DIR}/fe-print.c"
-    "${LIBPQ_SOURCE_DIR}/fe-protocol2.c"
+    "${LIBPQ_SOURCE_DIR}/fe-trace.c"
     "${LIBPQ_SOURCE_DIR}/fe-protocol3.c"
     "${LIBPQ_SOURCE_DIR}/fe-secure.c"
     "${LIBPQ_SOURCE_DIR}/fe-secure-common.c"
@@ -18,8 +18,12 @@ set(SRCS
     "${LIBPQ_SOURCE_DIR}/pqexpbuffer.c"

     "${LIBPQ_SOURCE_DIR}/common/scram-common.c"
-    "${LIBPQ_SOURCE_DIR}/common/sha2_openssl.c"
+    "${LIBPQ_SOURCE_DIR}/common/sha2.c"
+    "${LIBPQ_SOURCE_DIR}/common/sha1.c"
     "${LIBPQ_SOURCE_DIR}/common/md5.c"
+    "${LIBPQ_SOURCE_DIR}/common/md5_common.c"
+    "${LIBPQ_SOURCE_DIR}/common/hmac_openssl.c"
+    "${LIBPQ_SOURCE_DIR}/common/cryptohash.c"
     "${LIBPQ_SOURCE_DIR}/common/saslprep.c"
     "${LIBPQ_SOURCE_DIR}/common/unicode_norm.c"
     "${LIBPQ_SOURCE_DIR}/common/ip.c"
2  contrib/libunwind  vendored
@@ -1 +1 @@
-Subproject commit a491c27b33109a842d577c0f7ac5f5f218859181
+Subproject commit 6b816d2fba3991f8fd6aaec17d92f68947eab667
@@ -1,7 +1,7 @@
 add_library(murmurhash
-    src/murmurhash2.cpp
-    src/murmurhash3.cpp
-    include/murmurhash2.h
-    include/murmurhash3.h)
+    src/MurmurHash2.cpp
+    src/MurmurHash3.cpp
+    include/MurmurHash2.h
+    include/MurmurHash3.h)

 target_include_directories (murmurhash PUBLIC include)
49  contrib/murmurhash/include/MurmurHash2.h  Normal file
@@ -0,0 +1,49 @@
+//-----------------------------------------------------------------------------
+// MurmurHash2 was written by Austin Appleby, and is placed in the public
+// domain. The author hereby disclaims copyright to this source code.
+
+#ifndef MURMURHASH2_H
+#define MURMURHASH2_H
+
+#include <stddef.h>
+
+//-----------------------------------------------------------------------------
+// Platform-specific functions and macros
+
+// Microsoft Visual Studio
+
+#if defined(_MSC_VER) && (_MSC_VER < 1600)
+
+typedef unsigned char uint8_t;
+typedef unsigned int uint32_t;
+typedef unsigned __int64 uint64_t;
+
+// Other compilers
+
+#else // defined(_MSC_VER)
+
+#include <stdint.h>
+
+#endif // !defined(_MSC_VER)
+
+//-----------------------------------------------------------------------------
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t MurmurHash2        ( const void * key, size_t len, uint32_t seed );
+uint64_t MurmurHash64A      ( const void * key, size_t len, uint64_t seed );
+uint64_t MurmurHash64B      ( const void * key, size_t len, uint64_t seed );
+uint32_t MurmurHash2A       ( const void * key, size_t len, uint32_t seed );
+uint32_t MurmurHashNeutral2 ( const void * key, size_t len, uint32_t seed );
+uint32_t MurmurHashAligned2 ( const void * key, size_t len, uint32_t seed );
+
+#ifdef __cplusplus
+}
+#endif
+
+//-----------------------------------------------------------------------------
+
+#endif // _MURMURHASH2_H_
@@ -2,7 +2,10 @@
 // MurmurHash3 was written by Austin Appleby, and is placed in the public
 // domain. The author hereby disclaims copyright to this source code.

-#pragma once
+#ifndef MURMURHASH3_H
+#define MURMURHASH3_H
+
+#include <stddef.h>

 //-----------------------------------------------------------------------------
 // Platform-specific functions and macros
@@ -23,20 +26,22 @@ typedef unsigned __int64 uint64_t;

 #endif // !defined(_MSC_VER)

+//-----------------------------------------------------------------------------
+
 #ifdef __cplusplus
 extern "C" {
 #endif

-//-----------------------------------------------------------------------------
-
-void MurmurHash3_x86_32  ( const void * key, int len, uint32_t seed, void * out );
-
-void MurmurHash3_x86_128 ( const void * key, int len, uint32_t seed, void * out );
-
-void MurmurHash3_x64_128 ( const void * key, int len, uint32_t seed, void * out );
-
-//-----------------------------------------------------------------------------
+void MurmurHash3_x86_32  ( const void * key, size_t len, uint32_t seed, void * out );
+void MurmurHash3_x86_128 ( const void * key, size_t len, uint32_t seed, void * out );
+void MurmurHash3_x64_128 ( const void * key, size_t len, uint32_t seed, void * out );

 #ifdef __cplusplus
 }
 #endif
+
+//-----------------------------------------------------------------------------
+
+#endif // _MURMURHASH3_H_
@@ -1,31 +0,0 @@
-//-----------------------------------------------------------------------------
-// MurmurHash2 was written by Austin Appleby, and is placed in the public
-// domain. The author hereby disclaims copyright to this source code.
-
-#pragma once
-
-//-----------------------------------------------------------------------------
-// Platform-specific functions and macros
-
-// Microsoft Visual Studio
-
-#if defined(_MSC_VER) && (_MSC_VER < 1600)
-
-typedef unsigned char uint8_t;
-typedef unsigned int uint32_t;
-typedef unsigned __int64 uint64_t;
-
-// Other compilers
-
-#else // defined(_MSC_VER)
-
-#include <stdint.h>
-
-#endif // !defined(_MSC_VER)
-
-uint32_t MurmurHash2        (const void * key, int len, uint32_t seed);
-uint64_t MurmurHash64A      (const void * key, int len, uint64_t seed);
-uint64_t MurmurHash64B      (const void * key, int len, uint64_t seed);
-uint32_t MurmurHash2A       (const void * key, int len, uint32_t seed);
-uint32_t MurmurHashNeutral2 (const void * key, int len, uint32_t seed);
-uint32_t MurmurHashAligned2 (const void * key, int len, uint32_t seed);
523
contrib/murmurhash/src/MurmurHash2.cpp
Normal file
523
contrib/murmurhash/src/MurmurHash2.cpp
Normal file
@ -0,0 +1,523 @@
|
|||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
// MurmurHash2 was written by Austin Appleby, and is placed in the public
|
||||||
|
// domain. The author hereby disclaims copyright to this source code.
|
||||||
|
|
||||||
|
// Note - This code makes a few assumptions about how your machine behaves -
|
||||||
|
|
||||||
|
// 1. We can read a 4-byte value from any address without crashing
|
||||||
|
// 2. sizeof(int) == 4
|
||||||
|
|
||||||
|
// And it has a few limitations -
|
||||||
|
|
||||||
|
// 1. It will not work incrementally.
|
||||||
|
// 2. It will not produce the same results on little-endian and big-endian
|
||||||
|
// machines.
|
||||||
|
|
||||||
|
#include "MurmurHash2.h"
|
||||||
|
|
||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
// Platform-specific functions and macros
|
||||||
|
|
||||||
|
// Microsoft Visual Studio
|
||||||
|
|
||||||
|
#if defined(_MSC_VER)
|
||||||
|
|
||||||
|
#define BIG_CONSTANT(x) (x)
|
||||||
|
|
||||||
|
// Other compilers
|
||||||
|
|
||||||
|
#else // defined(_MSC_VER)
|
||||||
|
|
||||||
|
#define BIG_CONSTANT(x) (x##LLU)
|
||||||
|
|
||||||
|
#endif // !defined(_MSC_VER)
|
||||||
|
|
||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
|
||||||
|
uint32_t MurmurHash2 ( const void * key, size_t len, uint32_t seed )
|
||||||
|
{
|
||||||
|
// 'm' and 'r' are mixing constants generated offline.
|
||||||
|
// They're not really 'magic', they just happen to work well.
|
||||||
|
|
||||||
|
const uint32_t m = 0x5bd1e995;
|
||||||
|
const int r = 24;
|
||||||
|
|
||||||
|
// Initialize the hash to a 'random' value
|
||||||
|
|
||||||
|
uint32_t h = seed ^ len;
|
||||||
|
|
||||||
|
// Mix 4 bytes at a time into the hash
|
||||||
|
|
||||||
|
const unsigned char * data = (const unsigned char *)key;
|
||||||
|
|
||||||
|
while(len >= 4)
|
||||||
|
{
|
||||||
|
uint32_t k = *(uint32_t*)data;
|
||||||
|
|
||||||
|
k *= m;
|
||||||
|
k ^= k >> r;
|
||||||
|
k *= m;
|
||||||
|
|
||||||
|
h *= m;
|
||||||
|
h ^= k;
|
||||||
|
|
||||||
|
data += 4;
|
||||||
|
len -= 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Handle the last few bytes of the input array
|
||||||
|
|
||||||
|
switch(len)
|
||||||
|
{
|
||||||
|
case 3: h ^= data[2] << 16;
|
||||||
|
case 2: h ^= data[1] << 8;
|
||||||
|
case 1: h ^= data[0];
|
||||||
|
h *= m;
|
||||||
|
};
|
||||||
|
|
||||||
|
// Do a few final mixes of the hash to ensure the last few
|
||||||
|
// bytes are well-incorporated.
|
||||||
|
|
||||||
|
h ^= h >> 13;
|
||||||
|
h *= m;
|
||||||
|
h ^= h >> 15;
|
||||||
|
|
||||||
|
return h;
|
||||||
|
}
|
||||||
|
|
||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
// MurmurHash2, 64-bit versions, by Austin Appleby
|
||||||
|
|
||||||
|
// The same caveats as 32-bit MurmurHash2 apply here - beware of alignment
|
||||||
|
// and endian-ness issues if used across multiple platforms.
|
||||||
|
|
||||||
|
// 64-bit hash for 64-bit platforms
|
||||||
|
|
||||||
|
uint64_t MurmurHash64A ( const void * key, size_t len, uint64_t seed )
|
||||||
|
{
|
||||||
|
const uint64_t m = BIG_CONSTANT(0xc6a4a7935bd1e995);
|
||||||
|
const int r = 47;
|
||||||
|
|
||||||
|
uint64_t h = seed ^ (len * m);
|
||||||
|
|
||||||
|
const uint64_t * data = (const uint64_t *)key;
|
||||||
|
const uint64_t * end = data + (len/8);
|
||||||
|
|
||||||
|
while(data != end)
|
||||||
|
{
|
||||||
|
uint64_t k = *data++;
|
||||||
|
|
||||||
|
k *= m;
|
||||||
|
k ^= k >> r;
|
||||||
|
k *= m;
|
||||||
|
|
||||||
|
h ^= k;
|
||||||
|
h *= m;
|
||||||
|
}
|
||||||
|
|
||||||
|
const unsigned char * data2 = (const unsigned char*)data;
|
||||||
|
|
||||||
|
switch(len & 7)
|
||||||
|
{
|
||||||
|
case 7: h ^= uint64_t(data2[6]) << 48;
|
||||||
|
case 6: h ^= uint64_t(data2[5]) << 40;
|
||||||
|
case 5: h ^= uint64_t(data2[4]) << 32;
|
||||||
|
case 4: h ^= uint64_t(data2[3]) << 24;
|
||||||
|
case 3: h ^= uint64_t(data2[2]) << 16;
|
||||||
|
case 2: h ^= uint64_t(data2[1]) << 8;
|
||||||
|
case 1: h ^= uint64_t(data2[0]);
|
||||||
|
h *= m;
|
||||||
|
};
|
||||||
|
|
||||||
|
h ^= h >> r;
|
||||||
|
h *= m;
|
||||||
|
h ^= h >> r;
|
||||||
|
|
||||||
|
return h;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
// 64-bit hash for 32-bit platforms
|
||||||
|
|
||||||
|
uint64_t MurmurHash64B ( const void * key, size_t len, uint64_t seed )
|
||||||
|
{
|
||||||
|
const uint32_t m = 0x5bd1e995;
|
||||||
|
const int r = 24;
|
||||||
|
|
||||||
|
uint32_t h1 = uint32_t(seed) ^ len;
|
||||||
|
uint32_t h2 = uint32_t(seed >> 32);
|
||||||
|
|
||||||
|
const uint32_t * data = (const uint32_t *)key;
|
||||||
|
|
||||||
|
while(len >= 8)
|
||||||
|
{
|
||||||
|
uint32_t k1 = *data++;
|
||||||
|
k1 *= m; k1 ^= k1 >> r; k1 *= m;
|
||||||
|
h1 *= m; h1 ^= k1;
|
||||||
|
len -= 4;
|
||||||
|
|
||||||
|
uint32_t k2 = *data++;
|
||||||
|
k2 *= m; k2 ^= k2 >> r; k2 *= m;
|
||||||
|
h2 *= m; h2 ^= k2;
|
||||||
|
len -= 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
if(len >= 4)
|
||||||
|
{
|
||||||
|
uint32_t k1 = *data++;
|
||||||
|
k1 *= m; k1 ^= k1 >> r; k1 *= m;
|
||||||
|
h1 *= m; h1 ^= k1;
|
||||||
|
len -= 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
switch(len)
|
||||||
|
{
|
||||||
|
case 3: h2 ^= ((unsigned char*)data)[2] << 16;
|
||||||
|
case 2: h2 ^= ((unsigned char*)data)[1] << 8;
|
||||||
|
case 1: h2 ^= ((unsigned char*)data)[0];
|
||||||
|
h2 *= m;
|
||||||
|
};
|
||||||
|
|
||||||
|
h1 ^= h2 >> 18; h1 *= m;
|
||||||
|
h2 ^= h1 >> 22; h2 *= m;
|
||||||
|
h1 ^= h2 >> 17; h1 *= m;
|
||||||
|
h2 ^= h1 >> 19; h2 *= m;
|
||||||
|
|
||||||
|
uint64_t h = h1;
|
||||||
|
|
||||||
|
h = (h << 32) | h2;
|
||||||
|
|
||||||
|
return h;
|
||||||
|
}
|
||||||
|
|
||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
// MurmurHash2A, by Austin Appleby
|
||||||
|
|
||||||
|
// This is a variant of MurmurHash2 modified to use the Merkle-Damgard
|
||||||
|
// construction. Bulk speed should be identical to Murmur2, small-key speed
|
||||||
|
// will be 10%-20% slower due to the added overhead at the end of the hash.
|
||||||
|
|
||||||
|
// This variant fixes a minor issue where null keys were more likely to
|
||||||
|
// collide with each other than expected, and also makes the function
|
||||||
|
// more amenable to incremental implementations.
|
||||||
|
|
||||||
|
#define mmix(h,k) { k *= m; k ^= k >> r; k *= m; h *= m; h ^= k; }
|
||||||
|
|
||||||
|
uint32_t MurmurHash2A ( const void * key, size_t len, uint32_t seed )
|
||||||
|
{
|
||||||
|
const uint32_t m = 0x5bd1e995;
|
||||||
|
const int r = 24;
|
||||||
|
uint32_t l = len;
|
||||||
|
|
||||||
|
const unsigned char * data = (const unsigned char *)key;
|
||||||
|
|
||||||
|
uint32_t h = seed;
|
||||||
|
|
||||||
|
while(len >= 4)
|
||||||
|
{
|
||||||
|
uint32_t k = *(uint32_t*)data;
|
||||||
|
|
||||||
|
mmix(h,k);
|
||||||
|
|
||||||
|
data += 4;
|
||||||
|
len -= 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
uint32_t t = 0;
|
||||||
|
|
||||||
|
switch(len)
|
||||||
|
{
|
||||||
|
case 3: t ^= data[2] << 16;
|
||||||
|
case 2: t ^= data[1] << 8;
|
||||||
|
case 1: t ^= data[0];
|
||||||
|
};
|
||||||
|
|
||||||
|
mmix(h,t);
|
||||||
|
mmix(h,l);
|
||||||
|
|
||||||
|
h ^= h >> 13;
|
||||||
|
h *= m;
|
||||||
|
h ^= h >> 15;
|
||||||
|
|
||||||
|
return h;
|
||||||
|
}
|
||||||
|
|
||||||
|
//-----------------------------------------------------------------------------
|
||||||
|
// CMurmurHash2A, by Austin Appleby
|
||||||
|
|
||||||
|
// This is a sample implementation of MurmurHash2A designed to work
|
||||||
|
// incrementally.
|
||||||
|
|
||||||
|
// Usage -
|
||||||
|
|
||||||
|
// CMurmurHash2A hasher
|
||||||
|
// hasher.Begin(seed);
|
||||||
|
// hasher.Add(data1,size1);
|
||||||
|
// hasher.Add(data2,size2);
|
||||||
|
// ...
|
||||||
|
// hasher.Add(dataN,sizeN);
|
||||||
|
// uint32_t hash = hasher.End()
|
||||||
|
|
||||||
|
class CMurmurHash2A
|
||||||
|
{
|
||||||
|
public:
|
||||||
|
|
||||||
|
void Begin ( uint32_t seed = 0 )
|
||||||
|
{
|
||||||
|
m_hash = seed;
|
||||||
|
m_tail = 0;
|
||||||
|
m_count = 0;
|
||||||
|
m_size = 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
void Add ( const unsigned char * data, size_t len )
|
||||||
|
{
|
||||||
|
m_size += len;
|
||||||
|
|
||||||
|
MixTail(data,len);
|
||||||
|
|
||||||
|
while(len >= 4)
|
||||||
|
{
|
||||||
|
uint32_t k = *(uint32_t*)data;
|
||||||
|
|
||||||
|
mmix(m_hash,k);
|
||||||
|
|
||||||
|
data += 4;
|
||||||
|
len -= 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
MixTail(data,len);
|
||||||
|
}
|
||||||
|
|
||||||
|
uint32_t End ( void )
|
||||||
|
{
|
||||||
|
mmix(m_hash,m_tail);
|
||||||
|
mmix(m_hash,m_size);
|
||||||
|
|
||||||
|
m_hash ^= m_hash >> 13;
|
||||||
|
m_hash *= m;
|
        m_hash ^= m_hash >> 15;

        return m_hash;
    }

private:

    static const uint32_t m = 0x5bd1e995;
    static const int r = 24;

    void MixTail ( const unsigned char * & data, size_t & len )
    {
        while( len && ((len<4) || m_count) )
        {
            m_tail |= (*data++) << (m_count * 8);

            m_count++;
            len--;

            if(m_count == 4)
            {
                mmix(m_hash,m_tail);
                m_tail = 0;
                m_count = 0;
            }
        }
    }

    uint32_t m_hash;
    uint32_t m_tail;
    uint32_t m_count;
    uint32_t m_size;
};

//-----------------------------------------------------------------------------
// MurmurHashNeutral2, by Austin Appleby

// Same as MurmurHash2, but endian- and alignment-neutral.
// Half the speed though, alas.

uint32_t MurmurHashNeutral2 ( const void * key, size_t len, uint32_t seed )
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    uint32_t h = seed ^ len;

    const unsigned char * data = (const unsigned char *)key;

    while(len >= 4)
    {
        uint32_t k;

        k  = data[0];
        k |= data[1] << 8;
        k |= data[2] << 16;
        k |= data[3] << 24;

        k *= m;
        k ^= k >> r;
        k *= m;

        h *= m;
        h ^= k;

        data += 4;
        len -= 4;
    }

    switch(len)
    {
        case 3: h ^= data[2] << 16;
        case 2: h ^= data[1] << 8;
        case 1: h ^= data[0];
                h *= m;
    };

    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}

//-----------------------------------------------------------------------------
// MurmurHashAligned2, by Austin Appleby

// Same algorithm as MurmurHash2, but only does aligned reads - should be safer
// on certain platforms.

// Performance will be lower than MurmurHash2

#define MIX(h,k,m) { k *= m; k ^= k >> r; k *= m; h *= m; h ^= k; }

uint32_t MurmurHashAligned2 ( const void * key, size_t len, uint32_t seed )
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    const unsigned char * data = (const unsigned char *)key;

    uint32_t h = seed ^ len;

    size_t align = (uint64_t)data & 3;

    if(align && (len >= 4))
    {
        // Pre-load the temp registers

        uint32_t t = 0, d = 0;

        switch(align)
        {
            case 1: t |= data[2] << 16;
            case 2: t |= data[1] << 8;
            case 3: t |= data[0];
        }

        t <<= (8 * align);

        data += 4-align;
        len -= 4-align;

        int sl = 8 * (4-align);
        int sr = 8 * align;

        // Mix

        while(len >= 4)
        {
            d = *(uint32_t *)data;
            t = (t >> sr) | (d << sl);

            uint32_t k = t;

            MIX(h,k,m);

            t = d;

            data += 4;
            len -= 4;
        }

        // Handle leftover data in temp registers

        d = 0;

        if(len >= align)
        {
            switch(align)
            {
                case 3: d |= data[2] << 16;
                case 2: d |= data[1] << 8;
                case 1: d |= data[0];
            }

            uint32_t k = (t >> sr) | (d << sl);
            MIX(h,k,m);

            data += align;
            len -= align;

            //----------
            // Handle tail bytes

            switch(len)
            {
                case 3: h ^= data[2] << 16;
                case 2: h ^= data[1] << 8;
                case 1: h ^= data[0];
                        h *= m;
            };
        }
        else
        {
            switch(len)
            {
                case 3: d |= data[2] << 16;
                case 2: d |= data[1] << 8;
                case 1: d |= data[0];
                case 0: h ^= (t >> sr) | (d << sl);
                        h *= m;
            }
        }

        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return h;
    }
    else
    {
        while(len >= 4)
        {
            uint32_t k = *(uint32_t *)data;

            MIX(h,k,m);

            data += 4;
            len -= 4;
        }

        //----------
        // Handle tail bytes

        switch(len)
        {
            case 3: h ^= data[2] << 16;
            case 2: h ^= data[1] << 8;
            case 1: h ^= data[0];
                    h *= m;
        };

        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return h;
    }
}

//-----------------------------------------------------------------------------
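For reference, a minimal sketch of exercising the two variants above. The header name `MurmurHash2.h` is an assumption based on the upstream SMHasher file layout, not something stated in this diff:

``` cpp
#include <cstdio>
#include <cstring>
#include "MurmurHash2.h"  // assumed header declaring the functions above

int main()
{
    const char * key = "ClickHouse";
    // Endian- and alignment-neutral variant: same result on any platform.
    uint32_t h1 = MurmurHashNeutral2(key, strlen(key), /*seed=*/0);
    // Aligned-read variant: same algorithm as MurmurHash2, safe on strict-alignment CPUs.
    uint32_t h2 = MurmurHashAligned2(key, strlen(key), /*seed=*/0);
    printf("%08x %08x\n", h1, h2);
    return 0;
}
```

On little-endian machines the same input and seed produce identical values from MurmurHash2, MurmurHashNeutral2 and MurmurHashAligned2; only MurmurHashNeutral2 guarantees that value on big-endian ones.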
@@ -1,3 +1,4 @@
+//-----------------------------------------------------------------------------
 // MurmurHash3 was written by Austin Appleby, and is placed in the public
 // domain. The author hereby disclaims copyright to this source code.

@@ -6,8 +7,8 @@
 // compile and run any of them on any platform, but your performance with the
 // non-native version will be less than optimal.

-#include "murmurhash3.h"
-#include <cstring>
+#include "MurmurHash3.h"
+#include <string.h>

 //-----------------------------------------------------------------------------
 // Platform-specific functions and macros

@@ -93,7 +94,7 @@ FORCE_INLINE uint64_t fmix64 ( uint64_t k )

 //-----------------------------------------------------------------------------

-void MurmurHash3_x86_32 ( const void * key, int len,
+void MurmurHash3_x86_32 ( const void * key, size_t len,
                           uint32_t seed, void * out )
 {
   const uint8_t * data = (const uint8_t*)key;

@@ -149,7 +150,7 @@ void MurmurHash3_x86_32 ( const void * key, int len,

 //-----------------------------------------------------------------------------

-void MurmurHash3_x86_128 ( const void * key, const int len,
+void MurmurHash3_x86_128 ( const void * key, const size_t len,
                            uint32_t seed, void * out )
 {
   const uint8_t * data = (const uint8_t*)key;

@@ -254,7 +255,7 @@ void MurmurHash3_x86_128 ( const void * key, const int len,

 //-----------------------------------------------------------------------------

-void MurmurHash3_x64_128 ( const void * key, const int len,
+void MurmurHash3_x64_128 ( const void * key, const size_t len,
                            const uint32_t seed, void * out )
 {
   const uint8_t * data = (const uint8_t*)key;

@@ -332,3 +333,6 @@ void MurmurHash3_x64_128 ( const void * key, const int len,
   ((uint64_t*)out)[0] = h1;
   ((uint64_t*)out)[1] = h2;
 }
+
+//-----------------------------------------------------------------------------
+
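The `int len` to `size_t len` change widens the length parameter, so buffers larger than 2 GiB no longer narrow implicitly. A minimal sketch of calling the 128-bit variant under the new signature (the output layout of two `uint64_t` words follows from the last hunk above):

``` cpp
#include <cstdint>
#include <cstring>
#include "MurmurHash3.h"

// Hash a string and return the low 64 bits of the 128-bit result.
uint64_t hash128_low(const char * s, uint32_t seed)
{
    uint64_t out[2];  // 128-bit result: out[0] = h1, out[1] = h2
    MurmurHash3_x64_128(s, strlen(s), seed, out);  // strlen returns size_t, matching the new signature
    return out[0];
}
```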
@@ -1,423 +0,0 @@ (the old murmurhash2.cpp is removed entirely; its content follows)
// MurmurHash2 was written by Austin Appleby, and is placed in the public
// domain. The author hereby disclaims copyright to this source code.

// Note - This code makes a few assumptions about how your machine behaves -

// 1. We can read a 4-byte value from any address without crashing
// 2. sizeof(int) == 4

// And it has a few limitations -

// 1. It will not work incrementally.
// 2. It will not produce the same results on little-endian and big-endian
//    machines.

#include "murmurhash2.h"
#include <cstring>

// Platform-specific functions and macros
// Microsoft Visual Studio

#if defined(_MSC_VER)

#define BIG_CONSTANT(x) (x)

// Other compilers

#else // defined(_MSC_VER)

#define BIG_CONSTANT(x) (x##LLU)

#endif // !defined(_MSC_VER)


uint32_t MurmurHash2(const void * key, int len, uint32_t seed)
{
    // 'm' and 'r' are mixing constants generated offline.
    // They're not really 'magic', they just happen to work well.

    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    // Initialize the hash to a 'random' value

    uint32_t h = seed ^ len;

    // Mix 4 bytes at a time into the hash

    const unsigned char * data = reinterpret_cast<const unsigned char *>(key);

    while (len >= 4)
    {
        uint32_t k;
        memcpy(&k, data, sizeof(k));
        k *= m;
        k ^= k >> r;
        k *= m;

        h *= m;
        h ^= k;

        data += 4;
        len -= 4;
    }

    // Handle the last few bytes of the input array

    switch (len)
    {
        case 3: h ^= data[2] << 16;
        case 2: h ^= data[1] << 8;
        case 1: h ^= data[0];
                h *= m;
    };

    // Do a few final mixes of the hash to ensure the last few
    // bytes are well-incorporated.

    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}

// MurmurHash2, 64-bit versions, by Austin Appleby

// The same caveats as 32-bit MurmurHash2 apply here - beware of alignment
// and endian-ness issues if used across multiple platforms.

// 64-bit hash for 64-bit platforms

uint64_t MurmurHash64A(const void * key, int len, uint64_t seed)
{
    const uint64_t m = BIG_CONSTANT(0xc6a4a7935bd1e995);
    const int r = 47;

    uint64_t h = seed ^ (len * m);

    const uint64_t * data = reinterpret_cast<const uint64_t *>(key);
    const uint64_t * end = data + (len/8);

    while (data != end)
    {
        uint64_t k = *data++;

        k *= m;
        k ^= k >> r;
        k *= m;

        h ^= k;
        h *= m;
    }

    const unsigned char * data2 = reinterpret_cast<const unsigned char *>(data);

    switch (len & 7)
    {
        case 7: h ^= static_cast<uint64_t>(data2[6]) << 48;
        case 6: h ^= static_cast<uint64_t>(data2[5]) << 40;
        case 5: h ^= static_cast<uint64_t>(data2[4]) << 32;
        case 4: h ^= static_cast<uint64_t>(data2[3]) << 24;
        case 3: h ^= static_cast<uint64_t>(data2[2]) << 16;
        case 2: h ^= static_cast<uint64_t>(data2[1]) << 8;
        case 1: h ^= static_cast<uint64_t>(data2[0]);
                h *= m;
    };

    h ^= h >> r;
    h *= m;
    h ^= h >> r;

    return h;
}

// 64-bit hash for 32-bit platforms

uint64_t MurmurHash64B(const void * key, int len, uint64_t seed)
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    uint32_t h1 = static_cast<uint32_t>(seed) ^ len;
    uint32_t h2 = static_cast<uint32_t>(seed >> 32);

    const uint32_t * data = reinterpret_cast<const uint32_t *>(key);

    while (len >= 8)
    {
        uint32_t k1 = *data++;
        k1 *= m; k1 ^= k1 >> r; k1 *= m;
        h1 *= m; h1 ^= k1;
        len -= 4;

        uint32_t k2 = *data++;
        k2 *= m; k2 ^= k2 >> r; k2 *= m;
        h2 *= m; h2 ^= k2;
        len -= 4;
    }

    if (len >= 4)
    {
        uint32_t k1 = *data++;
        k1 *= m; k1 ^= k1 >> r; k1 *= m;
        h1 *= m; h1 ^= k1;
        len -= 4;
    }

    switch (len)
    {
        case 3: h2 ^= reinterpret_cast<const unsigned char *>(data)[2] << 16;
        case 2: h2 ^= reinterpret_cast<const unsigned char *>(data)[1] << 8;
        case 1: h2 ^= reinterpret_cast<const unsigned char *>(data)[0];
                h2 *= m;
    };

    h1 ^= h2 >> 18; h1 *= m;
    h2 ^= h1 >> 22; h2 *= m;
    h1 ^= h2 >> 17; h1 *= m;
    h2 ^= h1 >> 19; h2 *= m;

    uint64_t h = h1;

    h = (h << 32) | h2;

    return h;
}

// MurmurHash2A, by Austin Appleby

// This is a variant of MurmurHash2 modified to use the Merkle-Damgard
// construction. Bulk speed should be identical to Murmur2, small-key speed
// will be 10%-20% slower due to the added overhead at the end of the hash.

// This variant fixes a minor issue where null keys were more likely to
// collide with each other than expected, and also makes the function
// more amenable to incremental implementations.

#define mmix(h,k) { k *= m; k ^= k >> r; k *= m; h *= m; h ^= k; }

uint32_t MurmurHash2A(const void * key, int len, uint32_t seed)
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;
    uint32_t l = len;

    const unsigned char * data = reinterpret_cast<const unsigned char *>(key);

    uint32_t h = seed;

    while (len >= 4)
    {
        uint32_t k = *reinterpret_cast<const uint32_t *>(data);
        mmix(h,k);
        data += 4;
        len -= 4;
    }

    uint32_t t = 0;

    switch (len)
    {
        case 3: t ^= data[2] << 16;
        case 2: t ^= data[1] << 8;
        case 1: t ^= data[0];
    };

    mmix(h,t);
    mmix(h,l);

    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}

// MurmurHashNeutral2, by Austin Appleby

// Same as MurmurHash2, but endian- and alignment-neutral.
// Half the speed though, alas.

uint32_t MurmurHashNeutral2(const void * key, int len, uint32_t seed)
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    uint32_t h = seed ^ len;

    const unsigned char * data = reinterpret_cast<const unsigned char *>(key);

    while (len >= 4)
    {
        uint32_t k;

        k  = data[0];
        k |= data[1] << 8;
        k |= data[2] << 16;
        k |= data[3] << 24;

        k *= m;
        k ^= k >> r;
        k *= m;

        h *= m;
        h ^= k;

        data += 4;
        len -= 4;
    }

    switch (len)
    {
        case 3: h ^= data[2] << 16;
        case 2: h ^= data[1] << 8;
        case 1: h ^= data[0];
                h *= m;
    };

    h ^= h >> 13;
    h *= m;
    h ^= h >> 15;

    return h;
}

//-----------------------------------------------------------------------------
// MurmurHashAligned2, by Austin Appleby

// Same algorithm as MurmurHash2, but only does aligned reads - should be safer
// on certain platforms.

// Performance will be lower than MurmurHash2

#define MIX(h,k,m) { k *= m; k ^= k >> r; k *= m; h *= m; h ^= k; }

uint32_t MurmurHashAligned2(const void * key, int len, uint32_t seed)
{
    const uint32_t m = 0x5bd1e995;
    const int r = 24;

    const unsigned char * data = reinterpret_cast<const unsigned char *>(key);

    uint32_t h = seed ^ len;

    int align = reinterpret_cast<uint64_t>(data) & 3;

    if (align && (len >= 4))
    {
        // Pre-load the temp registers

        uint32_t t = 0, d = 0;

        switch (align)
        {
            case 1: t |= data[2] << 16;
            case 2: t |= data[1] << 8;
            case 3: t |= data[0];
        }

        t <<= (8 * align);

        data += 4-align;
        len -= 4-align;

        int sl = 8 * (4-align);
        int sr = 8 * align;

        // Mix

        while (len >= 4)
        {
            d = *(reinterpret_cast<const uint32_t *>(data));
            t = (t >> sr) | (d << sl);

            uint32_t k = t;

            MIX(h,k,m);

            t = d;

            data += 4;
            len -= 4;
        }

        // Handle leftover data in temp registers

        d = 0;

        if (len >= align)
        {
            switch (align)
            {
                case 3: d |= data[2] << 16;
                case 2: d |= data[1] << 8;
                case 1: d |= data[0];
            }

            uint32_t k = (t >> sr) | (d << sl);
            MIX(h,k,m);

            data += align;
            len -= align;

            //----------
            // Handle tail bytes

            switch (len)
            {
                case 3: h ^= data[2] << 16;
                case 2: h ^= data[1] << 8;
                case 1: h ^= data[0];
                        h *= m;
            };
        }
        else
        {
            switch (len)
            {
                case 3: d |= data[2] << 16;
                case 2: d |= data[1] << 8;
                case 1: d |= data[0];
                case 0: h ^= (t >> sr) | (d << sl);
                        h *= m;
            }
        }

        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return h;
    }
    else
    {
        while (len >= 4)
        {
            uint32_t k = *reinterpret_cast<const uint32_t *>(data);

            MIX(h,k,m);

            data += 4;
            len -= 4;
        }

        // Handle tail bytes

        switch (len)
        {
            case 3: h ^= data[2] << 16;
            case 2: h ^= data[1] << 8;
            case 1: h ^= data[0];
                    h *= m;
        };

        h ^= h >> 13;
        h *= m;
        h ^= h >> 15;

        return h;
    }
}
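The removed file also carried the 64-bit variants. A minimal sketch of how `MurmurHash64A` was typically called, assuming the declarations from the deleted `murmurhash2.h`; note the cast that the old `int len` signature forced:

``` cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include "murmurhash2.h"  // header the deleted file shipped with

int main()
{
    const char * key = "example";
    // 64-bit hash tuned for 64-bit platforms; reads the buffer 8 bytes at a time.
    // The old signature took int, so strlen's size_t had to be narrowed explicitly.
    uint64_t h = MurmurHash64A(key, static_cast<int>(strlen(key)), /*seed=*/0);
    printf("%016llx\n", static_cast<unsigned long long>(h));
    return 0;
}
```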
debian/changelog (vendored, 4 changed lines)
@@ -1,5 +1,5 @@
-clickhouse (21.7.1.1) unstable; urgency=low
+clickhouse (21.9.1.1) unstable; urgency=low

   * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 20 May 2021 22:23:29 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Jul 2021 08:22:49 +0300
debian/clickhouse-server.init (vendored, 25 changed lines)
@@ -43,29 +43,6 @@ command -v flock >/dev/null && FLOCK=flock
 # Override defaults from optional config file
 test -f /etc/default/clickhouse && . /etc/default/clickhouse

-# On x86_64, check for required instruction set.
-if uname -mpi | grep -q 'x86_64'; then
-    if ! grep -q 'sse4_2' /proc/cpuinfo; then
-        # On KVM, cpuinfo could falsely not report SSE 4.2 support, so skip the check.
-        if ! grep -q 'Common KVM processor' /proc/cpuinfo; then
-
-            # Some other VMs also report wrong flags in cpuinfo.
-            # Tricky way to test for the instruction set:
-            # create a temporary binary and run it;
-            # if it gets caught by an illegal instruction signal,
-            # then the required instruction set is not really supported.
-            #
-            # Generated this way:
-            # gcc -xc -Os -static -nostdlib - <<< 'void _start() { __asm__("pcmpgtq %%xmm0, %%xmm1; mov $0x3c, %%rax; xor %%rdi, %%rdi; syscall":::"memory"); }' && strip -R .note.gnu.build-id -R .comment -R .eh_frame -s ./a.out && gzip -c -9 ./a.out | base64 -w0; echo
-
-            if ! (echo -n 'H4sICAwAW1cCA2Eub3V0AKt39XFjYmRkgAEmBjsGEI+H0QHMd4CKGyCUAMUsGJiBJDNQNUiYlQEZOKDQclB9cnD9CmCSBYqJBRxQOvBpSQobGfqIAWn8FuYnPI4fsAGyPQz/87MeZtArziguKSpJTGLQK0mtKGGgGHADMSgoYH6AhTMPNHyE0NQzYuEzYzEXFr6CBPQDANAsXKTwAQAA' | base64 -d | gzip -d > /tmp/clickhouse_test_sse42 && chmod a+x /tmp/clickhouse_test_sse42 && /tmp/clickhouse_test_sse42); then
-                echo 'Warning! SSE 4.2 instruction set is not supported'
-                #exit 3
-            fi
-        fi
-    fi
-fi

 die()
 {
@@ -116,7 +93,7 @@ forcestop()
 service_or_func()
 {
     if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then
-        service $PROGRAM $1
+        systemctl $1 $PROGRAM
     else
         $1
     fi
@@ -12,7 +12,6 @@ mkdir root
 pushd root
 mkdir lib lib64 etc tmp root
 cp ${BUILD_DIR}/programs/clickhouse .
-cp ${SRC_DIR}/programs/server/{config,users}.xml .
 cp /lib/x86_64-linux-gnu/{libc.so.6,libdl.so.2,libm.so.6,libpthread.so.0,librt.so.1,libnss_dns.so.2,libresolv.so.2} lib
 cp /lib64/ld-linux-x86-64.so.2 lib64
 cp /etc/resolv.conf ./etc
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.7.1.*
+ARG version=21.9.1.*

 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@@ -72,7 +72,7 @@ RUN git clone https://github.com/tpoechtrager/apple-libtapi.git \
     && cd .. \
     && rm -rf apple-libtapi

-# Build and install tools for cross-linking to Darwin
+# Build and install tools for cross-linking to Darwin (x86-64)
 RUN git clone https://github.com/tpoechtrager/cctools-port.git \
     && cd cctools-port/cctools \
     && ./configure --prefix=/cctools --with-libtapi=/cctools \
@@ -81,8 +81,17 @@ RUN git clone https://github.com/tpoechtrager/cctools-port.git \
     && cd ../.. \
     && rm -rf cctools-port

-# Download toolchain for Darwin
-RUN wget -nv https://github.com/phracker/MacOSX-SDKs/releases/download/10.15/MacOSX10.15.sdk.tar.xz
+# Build and install tools for cross-linking to Darwin (aarch64)
+RUN git clone https://github.com/tpoechtrager/cctools-port.git \
+    && cd cctools-port/cctools \
+    && ./configure --prefix=/cctools --with-libtapi=/cctools \
+       --target=aarch64-apple-darwin \
+    && make install \
+    && cd ../.. \
+    && rm -rf cctools-port
+
+# Download toolchain and SDK for Darwin
+RUN wget -nv https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz

 # Download toolchain for ARM
 # It contains all required headers and libraries. Note that it's named as "gcc" but actually we are using clang for cross compiling.
@@ -3,7 +3,9 @@
 set -x -e

 mkdir -p build/cmake/toolchain/darwin-x86_64
-tar xJf MacOSX10.15.sdk.tar.xz -C build/cmake/toolchain/darwin-x86_64 --strip-components=1
+tar xJf MacOSX11.0.sdk.tar.xz -C build/cmake/toolchain/darwin-x86_64 --strip-components=1
+
+ln -sf darwin-x86_64 build/cmake/toolchain/darwin-aarch64

 mkdir -p build/cmake/toolchain/linux-aarch64
 tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build/cmake/toolchain/linux-aarch64 --strip-components=1
@@ -58,6 +58,7 @@ def run_docker_image_with_env(image_name, output, env_variables, ch_root, ccache
 def parse_env_variables(build_type, compiler, sanitizer, package_type, image_type, cache, distcc_hosts, unbundled, split_binary, clang_tidy, version, author, official, alien_pkgs, with_coverage, with_binaries):
     CLANG_PREFIX = "clang"
     DARWIN_SUFFIX = "-darwin"
+    DARWIN_ARM_SUFFIX = "-darwin-aarch64"
     ARM_SUFFIX = "-aarch64"
     FREEBSD_SUFFIX = "-freebsd"

@@ -66,9 +67,10 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_type
     is_clang = compiler.startswith(CLANG_PREFIX)
     is_cross_darwin = compiler.endswith(DARWIN_SUFFIX)
+    is_cross_darwin_arm = compiler.endswith(DARWIN_ARM_SUFFIX)
     is_cross_arm = compiler.endswith(ARM_SUFFIX)
     is_cross_freebsd = compiler.endswith(FREEBSD_SUFFIX)
-    is_cross_compile = is_cross_darwin or is_cross_arm or is_cross_freebsd
+    is_cross_compile = is_cross_darwin or is_cross_darwin_arm or is_cross_arm or is_cross_freebsd

     # Explicitly use LLD with Clang by default.
     # Don't force linker for cross-compilation.
@@ -82,6 +84,13 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_type
         cmake_flags.append("-DCMAKE_RANLIB:FILEPATH=/cctools/bin/x86_64-apple-darwin-ranlib")
         cmake_flags.append("-DLINKER_NAME=/cctools/bin/x86_64-apple-darwin-ld")
         cmake_flags.append("-DCMAKE_TOOLCHAIN_FILE=/build/cmake/darwin/toolchain-x86_64.cmake")
+    elif is_cross_darwin_arm:
+        cc = compiler[:-len(DARWIN_ARM_SUFFIX)]
+        cmake_flags.append("-DCMAKE_AR:FILEPATH=/cctools/bin/aarch64-apple-darwin-ar")
+        cmake_flags.append("-DCMAKE_INSTALL_NAME_TOOL=/cctools/bin/aarch64-apple-darwin-install_name_tool")
+        cmake_flags.append("-DCMAKE_RANLIB:FILEPATH=/cctools/bin/aarch64-apple-darwin-ranlib")
+        cmake_flags.append("-DLINKER_NAME=/cctools/bin/aarch64-apple-darwin-ld")
+        cmake_flags.append("-DCMAKE_TOOLCHAIN_FILE=/build/cmake/darwin/toolchain-aarch64.cmake")
     elif is_cross_arm:
         cc = compiler[:-len(ARM_SUFFIX)]
         cmake_flags.append("-DCMAKE_TOOLCHAIN_FILE=/build/cmake/linux/toolchain-aarch64.cmake")
@@ -185,8 +194,8 @@ if __name__ == "__main__":
     parser.add_argument("--clickhouse-repo-path", default=os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir, os.pardir))
     parser.add_argument("--output-dir", required=True)
     parser.add_argument("--build-type", choices=("debug", ""), default="")
-    parser.add_argument("--compiler", choices=("clang-11", "clang-11-darwin", "clang-11-aarch64", "clang-11-freebsd",
-                                               "gcc-10"), default="clang-11")
+    parser.add_argument("--compiler", choices=("clang-11", "clang-11-darwin", "clang-11-darwin-aarch64", "clang-11-aarch64",
+                                               "clang-11-freebsd", "gcc-10"), default="clang-11")
     parser.add_argument("--sanitizer", choices=("address", "thread", "memory", "undefined", ""), default="")
     parser.add_argument("--unbundled", action="store_true")
     parser.add_argument("--split-binary", action="store_true")
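With the new compiler choice in place, a macOS/AArch64 cross build can be requested like any other target. A hypothetical invocation sketch (the `--package-type` flag is assumed from the `package_type` parameter visible in the function signature above; the output path is a placeholder):

``` bash
# Cross-compile ClickHouse for Apple Silicon using the darwin-aarch64 toolchain.
./docker/packager/packager \
    --output-dir "$HOME/clickhouse-darwin-aarch64" \
    --compiler clang-11-darwin-aarch64 \
    --package-type binary
```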
@@ -1,7 +1,7 @@
 FROM ubuntu:20.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.7.1.*
+ARG version=21.9.1.*
 ARG gosu_ver=1.10

 # set non-empty deb_location_url url to create a docker image
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04

 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.7.1.*
+ARG version=21.9.1.*

 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
@@ -46,6 +46,7 @@ RUN apt-get update \
             pigz \
             pkg-config \
             tzdata \
+            pv \
         --yes --no-install-recommends

 # Sanitizer options for services (clickhouse-server)
@@ -113,6 +113,7 @@ function start_server
     echo "ClickHouse server pid '$server_pid' started and responded"

     echo "
+set follow-fork-mode child
 handle all noprint
 handle SIGSEGV stop print
 handle SIGBUS stop print
@@ -159,7 +160,6 @@ function clone_submodules
     SUBMODULES_TO_UPDATE=(
         contrib/abseil-cpp
-        contrib/antlr4-runtime
         contrib/boost
         contrib/zlib-ng
         contrib/libxml2
@@ -373,14 +373,11 @@ function run_tests
         # Depends on AWS
         01801_s3_cluster

-        # Depends on LLVM JIT
-        01072_nullable_jit
-        01852_jit_if
-        01865_jit_comparison_constant_result
-        01871_merge_tree_compile_expressions
-
         # needs psql
         01889_postgresql_protocol_null_fields
+
+        # needs pv
+        01923_network_receive_time_metric_insert
     )

     time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
@@ -103,6 +103,7 @@ function fuzz
     kill -0 $server_pid

     echo "
+set follow-fork-mode child
 handle all noprint
 handle SIGSEGV stop print
 handle SIGBUS stop print
@@ -1,6 +1,8 @@
 # docker build -t yandex/clickhouse-integration-test .
 FROM yandex/clickhouse-test-base

+SHELL ["/bin/bash", "-c"]
+
 RUN apt-get update \
     && env DEBIAN_FRONTEND=noninteractive apt-get -y install \
         tzdata \
@@ -20,7 +22,9 @@ RUN apt-get update \
         krb5-user \
         iproute2 \
         lsof \
-        g++
+        g++ \
+        default-jre

 RUN rm -rf \
     /var/lib/apt/lists/* \
     /var/cache/debconf \
@@ -30,6 +34,19 @@ RUN apt-get clean
 # Install MySQL ODBC driver
 RUN curl 'https://cdn.mysql.com//Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so

+# Unfortunately this is required for a single test for conversion of data from ZooKeeper to clickhouse-keeper.
+# ZooKeeper is not started by default, but consumes some space in containers.
+# 777 perms used to allow anybody to start/stop ZooKeeper
+ENV ZOOKEEPER_VERSION='3.6.3'
+RUN curl -O "https://mirrors.estointernet.in/apache/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz"
+RUN tar -zxvf apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz && mv apache-zookeeper-${ZOOKEEPER_VERSION}-bin /opt/zookeeper && chmod -R 777 /opt/zookeeper && rm apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz
+RUN echo $'tickTime=2500 \n\
+tickTime=2500 \n\
+dataDir=/zookeeper \n\
+clientPort=2181 \n\
+maxClientCnxns=80' > /opt/zookeeper/conf/zoo.cfg
+RUN mkdir /zookeeper && chmod -R 777 /zookeeper
+
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
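For reference, the keeper-conversion test can bring the bundled ZooKeeper up and down with the stock scripts shipped in the archive; a sketch using standard Apache ZooKeeper commands (the Dockerfile above only installs, it does not start the service):

``` bash
# Start the bundled ZooKeeper using the zoo.cfg written above ...
/opt/zookeeper/bin/zkServer.sh start
# ... and stop it again once the conversion test is done.
/opt/zookeeper/bin/zkServer.sh stop
```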
@@ -11,6 +11,7 @@ services:
       interval: 10s
       timeout: 5s
       retries: 5
+    command: [ "postgres", "-c", "wal_level=logical", "-c", "max_replication_slots=2"]
     networks:
       default:
         aliases:
@@ -319,14 +319,14 @@ function get_profiles

     wait

-    clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
+    clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.query_log where type = 'QueryFinish' format TSVWithNamesAndTypes" > left-query-log.tsv ||: &
     clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > left-query-thread-log.tsv ||: &
     clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.trace_log format TSVWithNamesAndTypes" > left-trace-log.tsv ||: &
     clickhouse-client --port $LEFT_SERVER_PORT --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > left-addresses.tsv ||: &
     clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.metric_log format TSVWithNamesAndTypes" > left-metric-log.tsv ||: &
     clickhouse-client --port $LEFT_SERVER_PORT --query "select * from system.asynchronous_metric_log format TSVWithNamesAndTypes" > left-async-metric-log.tsv ||: &

-    clickhouse-client --port $RIGHT_SERVER_PORT --query "select * from system.query_log where type = 2 format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
+    clickhouse-client --port $RIGHT_SERVER_PORT --query "select * from system.query_log where type = 'QueryFinish' format TSVWithNamesAndTypes" > right-query-log.tsv ||: &
     clickhouse-client --port $RIGHT_SERVER_PORT --query "select * from system.query_thread_log format TSVWithNamesAndTypes" > right-query-thread-log.tsv ||: &
     clickhouse-client --port $RIGHT_SERVER_PORT --query "select * from system.trace_log format TSVWithNamesAndTypes" > right-trace-log.tsv ||: &
     clickhouse-client --port $RIGHT_SERVER_PORT --query "select arrayJoin(trace) addr, concat(splitByChar('/', addressToLine(addr))[-1], '#', demangle(addressToSymbol(addr)) ) name from system.trace_log group by addr format TSVWithNamesAndTypes" > right-addresses.tsv ||: &
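The symbolic name is equivalent to the old numeric literal but self-documenting: in `system.query_log` the `type` column is an enum in which `'QueryFinish'` carries the value 2, so both predicates select the same rows. A quick sanity check:

``` sql
-- These two counts should be identical.
SELECT count() FROM system.query_log WHERE type = 'QueryFinish';
SELECT count() FROM system.query_log WHERE type = 2;
```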
@@ -409,10 +409,10 @@ create view right_query_log as select *
     '$(cat "right-query-log.tsv.columns")');

 create view query_logs as
-    select 0 version, query_id, ProfileEvents.Names, ProfileEvents.Values,
+    select 0 version, query_id, ProfileEvents,
         query_duration_ms, memory_usage from left_query_log
     union all
-    select 1 version, query_id, ProfileEvents.Names, ProfileEvents.Values,
+    select 1 version, query_id, ProfileEvents,
         query_duration_ms, memory_usage from right_query_log
     ;

@@ -424,7 +424,7 @@ create table query_run_metric_arrays engine File(TSV, 'analyze/query-run-metric-
     with (
         -- sumMapState with the list of all keys with '-0.' values. Negative zero is because
         -- sumMap removes keys with positive zeros.
-        with (select groupUniqArrayArray(ProfileEvents.Names) from query_logs) as all_names
+        with (select groupUniqArrayArray(mapKeys(ProfileEvents)) from query_logs) as all_names
         select arrayReduce('sumMapState', [(all_names, arrayMap(x->-0., all_names))])
     ) as all_metrics
     select test, query_index, version, query_id,
@@ -433,8 +433,8 @@ create table query_run_metric_arrays engine File(TSV, 'analyze/query-run-metric-
         [
             all_metrics,
             arrayReduce('sumMapState',
-                [(ProfileEvents.Names,
-                    arrayMap(x->toFloat64(x), ProfileEvents.Values))]
+                [(mapKeys(ProfileEvents),
+                    arrayMap(x->toFloat64(x), mapValues(ProfileEvents)))]
             ),
             arrayReduce('sumMapState', [(
                 ['client_time', 'server_time', 'memory_usage'],
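These hunks track the server-side change of `ProfileEvents` from two parallel arrays (`ProfileEvents.Names` / `ProfileEvents.Values`) to a single `Map(String, UInt64)` column; `mapKeys` and `mapValues` recover the old array views. A small illustration of querying the new column directly:

``` sql
-- Extract one counter from the Map column; equivalent to the old
-- ProfileEvents.Values[indexOf(ProfileEvents.Names, 'SelectQuery')] idiom.
SELECT query_id, ProfileEvents['SelectQuery'] AS select_queries
FROM system.query_log
WHERE type = 'QueryFinish'
LIMIT 10;
```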
@@ -1003,10 +1003,11 @@ create view query_log as select *

 create table unstable_run_metrics engine File(TSVWithNamesAndTypes,
     'unstable-run-metrics.$version.rep') as
-    select
-        test, query_index, query_id,
-        ProfileEvents.Values value, ProfileEvents.Names metric
-    from query_log array join ProfileEvents
+    select test, query_index, query_id, value, metric
+    from query_log
+    array join
+        mapValues(ProfileEvents) as value,
+        mapKeys(ProfileEvents) as metric
     join unstable_query_runs using (query_id)
     ;
@@ -1177,11 +1178,11 @@ create view right_async_metric_log as
 -- Use the right log as time reference because it may have higher precision.
 create table metrics engine File(TSV, 'metrics/metrics.tsv') as
     with (select min(event_time) from right_async_metric_log) as min_time
-    select name metric, r.event_time - min_time event_time, l.value as left, r.value as right
+    select metric, r.event_time - min_time event_time, l.value as left, r.value as right
     from right_async_metric_log r
     asof join file('left-async-metric-log.tsv', TSVWithNamesAndTypes,
         '$(cat left-async-metric-log.tsv.columns)') l
-    on l.name = r.name and r.event_time <= l.event_time
+    on l.metric = r.metric and r.event_time <= l.event_time
     order by metric, event_time
     ;
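The rename from `name` to `metric` keeps the join keys aligned with the new `system.asynchronous_metric_log` schema; the `ASOF JOIN` itself is unchanged and still pairs each row with the closest row from the other side that satisfies the inequality. A self-contained toy illustration of the same pattern:

``` sql
-- ASOF JOIN: equality on metric plus exactly one inequality;
-- the closest t2.ts satisfying t2.ts <= t1.ts is chosen (here: 8).
SELECT t1.metric, t1.ts, t2.ts AS matched_ts
FROM (SELECT 'cpu' AS metric, 10 AS ts) AS t1
ASOF JOIN (SELECT 'cpu' AS metric, arrayJoin([5, 8, 12]) AS ts) AS t2
    ON t1.metric = t2.metric AND t2.ts <= t1.ts;
```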
@@ -1280,7 +1281,7 @@ create table ci_checks engine File(TSVWithNamesAndTypes, 'ci-checks.tsv')
     then
         echo Database for test results is not specified, will not upload them.
         return 0
     fi

     set +x # Don't show password in the log
     client=(clickhouse-client
@@ -23,6 +23,7 @@

         <!-- disable jit for perf tests -->
         <compile_expressions>0</compile_expressions>
+        <compile_aggregate_expressions>0</compile_aggregate_expressions>
     </default>
 </profiles>
 <users>
@@ -561,7 +561,7 @@ if args.report == 'main':
     # Don't show mildly unstable queries, only the very unstable ones we
     # treat as errors.
     if very_unstable_queries:
-        if very_unstable_queries > 3:
+        if very_unstable_queries > 5:
             error_tests += very_unstable_queries
             status = 'failure'
         message_array.append(str(very_unstable_queries) + ' unstable')
@@ -35,7 +35,7 @@ if [ "$NUM_TRIES" -gt "1" ]; then
     # simplest way to forward env variables to server
     sudo -E -u clickhouse /usr/bin/clickhouse-server --config /etc/clickhouse-server/config.xml --daemon
 else
-    service clickhouse-server start
+    sudo clickhouse start
 fi

 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
@@ -1,4 +1,6 @@
 #!/bin/bash
+# shellcheck disable=SC2094
+# shellcheck disable=SC2086

 set -x
@@ -37,6 +39,17 @@ function stop()

 function start()
 {
+    # Rename existing log file - it will be more convenient to read separate files for separate server runs.
+    if [ -f '/var/log/clickhouse-server/clickhouse-server.log' ]
+    then
+        log_file_counter=1
+        while [ -f "/var/log/clickhouse-server/clickhouse-server.log.${log_file_counter}" ]
+        do
+            log_file_counter=$((log_file_counter + 1))
+        done
+        mv '/var/log/clickhouse-server/clickhouse-server.log' "/var/log/clickhouse-server/clickhouse-server.log.${log_file_counter}"
+    fi
+
     counter=0
     until clickhouse-client --query "SELECT 1"
     do
@@ -55,6 +68,7 @@ function start()
     done

     echo "
+set follow-fork-mode child
 handle all noprint
 handle SIGSEGV stop print
 handle SIGBUS stop print
@@ -140,7 +154,11 @@ zgrep -Fa "########################################" /test_output/* > /dev/null
     && echo -e 'Killed by signal (output files)\tFAIL' >> /test_output/test_results.tsv

 # Put logs into /test_output/
-pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz
+for log_file in /var/log/clickhouse-server/clickhouse-server.log*
+do
+    pigz < "${log_file}" > /test_output/"$(basename ${log_file})".gz
+done

 tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:
 mv /var/log/clickhouse-server/stderr.log /test_output/
 tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
@@ -2,18 +2,16 @@

 ## TL; DR How to make ClickHouse compile and link faster?

-Developer only! This command will likely fulfill most of your needs. Run before calling `ninja`.
+Minimal ClickHouse build example:

-```cmake
+```bash
 cmake .. \
-    -DCMAKE_C_COMPILER=/bin/clang-10 \
-    -DCMAKE_CXX_COMPILER=/bin/clang++-10 \
+    -DCMAKE_C_COMPILER=$(which clang-11) \
+    -DCMAKE_CXX_COMPILER=$(which clang++-11) \
     -DCMAKE_BUILD_TYPE=Debug \
     -DENABLE_CLICKHOUSE_ALL=OFF \
     -DENABLE_CLICKHOUSE_SERVER=ON \
     -DENABLE_CLICKHOUSE_CLIENT=ON \
-    -DUSE_STATIC_LIBRARIES=OFF \
-    -DSPLIT_SHARED_LIBRARIES=ON \
     -DENABLE_LIBRARIES=OFF \
     -DUSE_UNWIND=ON \
     -DENABLE_UTILS=OFF \
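After configuring with the minimal option set above, the build itself is the usual Ninja invocation; a short sketch, with target names matching the `ENABLE_CLICKHOUSE_*` options left on:

``` bash
# Build only the server and client binaries enabled above.
ninja clickhouse-server clickhouse-client
```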
docs/_includes/install/arm.sh (new file, 6 lines)
@@ -0,0 +1,6 @@
+# ARM (AArch64) build works on Amazon Graviton, Oracle Cloud, Huawei Cloud ARM machines.
+# The support for AArch64 is pre-production ready.
+
+wget 'https://builds.clickhouse.tech/master/aarch64/clickhouse'
+chmod a+x ./clickhouse
+sudo ./clickhouse install

docs/_includes/install/freebsd.sh (new file, 3 lines)
@@ -0,0 +1,3 @@
+wget 'https://builds.clickhouse.tech/master/freebsd/clickhouse'
+chmod a+x ./clickhouse
+sudo ./clickhouse install

docs/_includes/install/mac-arm.sh (new file, 3 lines)
@@ -0,0 +1,3 @@
+wget 'https://builds.clickhouse.tech/master/macos-aarch64/clickhouse'
+chmod a+x ./clickhouse
+./clickhouse

docs/_includes/install/mac-x86.sh (new file, 3 lines)
@@ -0,0 +1,3 @@
+wget 'https://builds.clickhouse.tech/master/macos/clickhouse'
+chmod a+x ./clickhouse
+./clickhouse
@@ -33,7 +33,7 @@ Reboot.

 ``` bash
 brew update
-brew install cmake ninja libtool gettext llvm gcc
+brew install cmake ninja libtool gettext llvm gcc binutils
 ```

 ## Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@@ -7,13 +7,13 @@ toc_title: Third-Party Libraries Used

 The list of third-party libraries can be obtained by the following query:

-```
+``` sql
 SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en'
 ```

 [Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)

 | library_name | license_type | license_path |
 |:-|:-|:-|
 | abseil-cpp | Apache | /contrib/abseil-cpp/LICENSE |
 | AMQP-CPP | Apache | /contrib/AMQP-CPP/LICENSE |
@@ -89,3 +89,15 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li
 | xz | Public Domain | /contrib/xz/COPYING |
 | zlib-ng | zLib | /contrib/zlib-ng/LICENSE.md |
 | zstd | BSD | /contrib/zstd/LICENSE |
+## Guidelines for adding new third-party libraries and maintaining custom changes in them {#adding-third-party-libraries}
+
+1. All external third-party code should reside in the dedicated directories under the `contrib` directory of the ClickHouse repo. Prefer Git submodules, when available.
+2. Fork/mirror the official repo in [ClickHouse-Extras](https://github.com/ClickHouse-Extras). Prefer official GitHub repos, when available.
+3. Branch from the branch you want to integrate, e.g., `master` -> `clickhouse/master`, or `release/vX.Y.Z` -> `clickhouse/release/vX.Y.Z`.
+4. All forks in [ClickHouse-Extras](https://github.com/ClickHouse-Extras) can be automatically synchronized with upstreams. `clickhouse/...` branches will remain unaffected, since virtually nobody is going to use that naming pattern in their upstream repos.
+5. Add submodules under `contrib` of the ClickHouse repo that refer to the above forks/mirrors. Set the submodules to track the corresponding `clickhouse/...` branches (a command sketch follows this list).
+6. Every time custom changes have to be made in the library code, a dedicated branch should be created, like `clickhouse/my-fix`. Then this branch should be merged into the branch that is tracked by the submodule, e.g., `clickhouse/master` or `clickhouse/release/vX.Y.Z`.
+7. No code should be pushed to any branch of the forks in [ClickHouse-Extras](https://github.com/ClickHouse-Extras) whose name does not follow the `clickhouse/...` pattern.
+8. Always write the custom changes with the official repo in mind. Once the PR is merged from (a feature/fix branch in) your personal fork into the fork in [ClickHouse-Extras](https://github.com/ClickHouse-Extras), and the submodule is bumped in the ClickHouse repo, consider opening another PR from (a feature/fix branch in) the fork in [ClickHouse-Extras](https://github.com/ClickHouse-Extras) to the official repo of the library. This makes sure that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, and 3) the change will not remain a maintenance burden solely on ClickHouse developers.
+9. When a submodule needs to start using newer code from the original branch (e.g., `master`), and since the custom changes might be merged into the branch it is tracking (e.g., `clickhouse/master`) and so it may diverge from its original counterpart (i.e., `master`), a careful merge should be carried out first, i.e., `master` -> `clickhouse/master`, and only then can the submodule be bumped in ClickHouse.
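A sketch of steps 3 and 5 as concrete commands; `example-lib` and the local paths are placeholders for illustration:

``` bash
# In the ClickHouse-Extras fork: create the tracking branch (step 3).
git clone https://github.com/ClickHouse-Extras/example-lib.git
cd example-lib
git checkout -b clickhouse/master origin/master
git push origin clickhouse/master

# In the ClickHouse repo: add the submodule and pin it to that branch (step 5).
cd /path/to/ClickHouse
git submodule add https://github.com/ClickHouse-Extras/example-lib.git contrib/example-lib
git config -f .gitmodules submodule.contrib/example-lib.branch clickhouse/master
```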
@@ -237,6 +237,8 @@ The description of ClickHouse architecture can be found here: https://clickhouse.tech/docs/en/development/architecture/

 The Code Style Guide: https://clickhouse.tech/docs/en/development/style/

+Adding third-party libraries: https://clickhouse.tech/docs/en/development/contrib/#adding-third-party-libraries
+
 Writing tests: https://clickhouse.tech/docs/en/development/tests/

 List of tasks: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22
@@ -628,7 +628,7 @@ If the class is not intended for polymorphic use, you do not need to make functi

 **18.** Encodings.

-Use UTF-8 everywhere. Use `std::string`and`char *`. Do not use `std::wstring`and`wchar_t`.
+Use UTF-8 everywhere. Use `std::string` and `char *`. Do not use `std::wstring` and `wchar_t`.

 **19.** Logging.
@ -749,17 +749,9 @@ If your code in the `master` branch is not buildable yet, exclude it from the bu

**1.** The C++20 standard library is used (experimental extensions are allowed), as well as `boost` and `Poco` frameworks.

**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in the form of source code in the `contrib` directory and built with ClickHouse.

**3.** Preference is always given to libraries that are already in use.

## General Recommendations {#general-recommendations-1}
@ -49,6 +49,7 @@ When working with the `MaterializeMySQL` database engine, [ReplacingMergeTree](.

| DATE, NEWDATE | [Date](../../sql-reference/data-types/date.md) |
| DATETIME, TIMESTAMP | [DateTime](../../sql-reference/data-types/datetime.md) |
| DATETIME2, TIMESTAMP2 | [DateTime64](../../sql-reference/data-types/datetime64.md) |
| ENUM | [Enum](../../sql-reference/data-types/enum.md) |
| STRING | [String](../../sql-reference/data-types/string.md) |
| VARCHAR, VAR_STRING | [String](../../sql-reference/data-types/string.md) |
| BLOB | [String](../../sql-reference/data-types/string.md) |
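
As a quick illustration of the mapping above, here is a hedged sketch; the host, database, table, and column names are placeholders, not from the original document, and on this version the engine may need to be enabled via its experimental-feature setting:

``` sql
-- Hypothetical sketch: replicate a MySQL database and inspect the mapped types.
CREATE DATABASE mysql_replica
ENGINE = MaterializeMySQL('mysql-host:3306', 'shop', 'user', 'password');

-- DATE, DATETIME, ENUM and VARCHAR columns arrive as Date, DateTime, Enum and String.
SELECT toTypeName(created_at), toTypeName(status)
FROM mysql_replica.orders
LIMIT 1;
```
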
71
docs/en/engines/database-engines/materialized-postgresql.md
Normal file
@ -0,0 +1,71 @@

---
toc_priority: 30
toc_title: MaterializedPostgreSQL
---

# MaterializedPostgreSQL {#materialize-postgresql}

## Creating a Database {#creating-a-database}

``` sql
CREATE DATABASE test_database
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password');

SELECT * FROM test_database.postgres_table;
```

## Settings {#settings}

1. `materialized_postgresql_max_block_size` — Number of rows collected before flushing data into the table. Default: `65536`.

2. `materialized_postgresql_tables_list` — Comma-separated list of tables for the MaterializedPostgreSQL database engine. Default: the whole database.

3. `materialized_postgresql_allow_automatic_update` — Allows reloading a table in the background when schema changes are detected. Default: `0` (`false`).

``` sql
CREATE DATABASE test_database
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgres_user', 'postgres_password')
SETTINGS materialized_postgresql_max_block_size = 65536,
         materialized_postgresql_tables_list = 'table1,table2,table3';

SELECT * FROM test_database.table1;
```

## Requirements {#requirements}

- Set `wal_level` to `logical` and `max_replication_slots` to at least `2` in the PostgreSQL config file.

- Each replicated table must have one of the following **replica identity**:

1. **default** (primary key)

2. **index**

``` bash
postgres# CREATE TABLE postgres_table (a Integer NOT NULL, b Integer, c Integer NOT NULL, d Integer, e Integer NOT NULL);
postgres# CREATE unique INDEX postgres_table_index on postgres_table(a, c, e);
postgres# ALTER TABLE postgres_table REPLICA IDENTITY USING INDEX postgres_table_index;
```

The primary key is always checked first. If it is absent, then the index defined as the replica identity index is checked.
If an index is used as the replica identity, there has to be only one such index in the table.
You can check which type is used for a specific table with the following command:

``` bash
postgres# SELECT CASE relreplident
          WHEN 'd' THEN 'default'
          WHEN 'n' THEN 'nothing'
          WHEN 'f' THEN 'full'
          WHEN 'i' THEN 'index'
       END AS replica_identity
FROM pg_class
WHERE oid = 'postgres_table'::regclass;
```

## Warning {#warning}

1. **TOAST** values conversion is not supported. The default value for the data type will be used.
@ -0,0 +1,53 @@

---
toc_priority: 12
toc_title: ExternalDistributed
---

# ExternalDistributed {#externaldistributed}

The `ExternalDistributed` engine allows performing `SELECT` queries on data stored on remote MySQL or PostgreSQL servers. It accepts the [MySQL](../../../engines/table-engines/integrations/mysql.md) or [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md) engine as an argument, so sharding is possible.

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
) ENGINE = ExternalDistributed('engine', 'host:port', 'database', 'table', 'user', 'password');
```

See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.

The table structure can differ from the original table structure:

- Column names should be the same as in the original table, but you can use just some of these columns and in any order.
- Column types may differ from those in the original table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.

**Engine Parameters**

- `engine` — The table engine `MySQL` or `PostgreSQL`.
- `host:port` — MySQL or PostgreSQL server address.
- `database` — Remote database name.
- `table` — Remote table name.
- `user` — User name.
- `password` — User password.

## Implementation Details {#implementation-details}

Supports multiple replicas, which must be listed by `|`; shards must be listed by `,`. For example:

```sql
CREATE TABLE test_shards (id UInt32, name String, age UInt32, money UInt32) ENGINE = ExternalDistributed('MySQL', 'mysql{1|2}:3306,mysql{3|4}:3306', 'clickhouse', 'test_replicas', 'root', 'clickhouse');
```

When specifying replicas, one of the available replicas is selected for each of the shards when reading. If the connection fails, the next replica is selected, and so on for all the replicas. If the connection attempt fails for all the replicas, the attempt is repeated the same way several times.

You can specify any number of shards and any number of replicas for each shard.

**See Also**

- [MySQL table engine](../../../engines/table-engines/integrations/mysql.md)
- [PostgreSQL table engine](../../../engines/table-engines/integrations/postgresql.md)
- [Distributed table engine](../../../engines/table-engines/special/distributed.md)
@ -0,0 +1,46 @@

---
toc_priority: 12
toc_title: MaterializedPostgreSQL
---

# MaterializedPostgreSQL {#materialize-postgresql}

## Creating a Table {#creating-a-table}

``` sql
CREATE TABLE test.postgresql_replica (key UInt64, value UInt64)
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgresql_replica', 'postgres_user', 'postgres_password')
PRIMARY KEY key;
```

## Requirements {#requirements}

- Set `wal_level` to `logical` and `max_replication_slots` to at least `2` in the PostgreSQL config file.

- A table with the `MaterializedPostgreSQL` engine must have a primary key - the same as the replica identity index (by default: primary key) of the PostgreSQL table (see [details on the replica identity index](../../database-engines/materialized-postgresql.md#requirements)).

- Only the `Atomic` database is allowed.

## Virtual columns {#virtual-columns}

- `_version` (`UInt64`)

- `_sign` (`Int8`)

These columns do not need to be added when the table is created. They are always accessible in a `SELECT` query.
The `_version` column equals the `LSN` position in the `WAL`, so it can be used to check how up-to-date the replication is.

``` sql
CREATE TABLE test.postgresql_replica (key UInt64, value UInt64)
ENGINE = MaterializedPostgreSQL('postgres1:5432', 'postgres_database', 'postgresql_replica', 'postgres_user', 'postgres_password')
PRIMARY KEY key;

SELECT key, value, _version FROM test.postgresql_replica;
```

## Warning {#warning}

1. **TOAST** values conversion is not supported. The default value for the data type will be used.
@ -28,8 +28,8 @@ See a detailed description of the [CREATE TABLE](../../../sql-reference/statemen

The table structure can differ from the original MySQL table structure:

- Column names should be the same as in the original MySQL table, but you can use just some of these columns and in any order.
- Column types may differ from those in the original MySQL table. ClickHouse tries to [cast](../../../engines/database-engines/mysql.md#data_types-support) values to the ClickHouse data types.
- The [external_table_functions_use_nulls](../../../operations/settings/settings.md#external-table-functions-use-nulls) setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.

**Engine Parameters**

@ -55,6 +55,12 @@ Simple `WHERE` clauses such as `=, !=, >, >=, <, <=` are executed on the MySQL s

The rest of the conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to MySQL finishes.

Supports multiple replicas, which must be listed by `|`. For example:

```sql
CREATE TABLE test_replicas (id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql{2|3|4}:3306', 'clickhouse', 'test_replicas', 'root', 'clickhouse');
```

## Usage Example {#usage-example}

Table in MySQL:
@ -29,7 +29,7 @@ The table structure can differ from the source table structure:

- Column names should be the same as in the source table, but you can use just some of these columns and in any order.
- Column types may differ from those in the source table. ClickHouse tries to [cast](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types.
- The [external_table_functions_use_nulls](../../../operations/settings/settings.md#external-table-functions-use-nulls) setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.

**Engine Parameters**
@ -23,8 +23,8 @@ See a detailed description of the [CREATE TABLE](../../../sql-reference/statemen

The table structure can differ from the original PostgreSQL table structure:

- Column names should be the same as in the original PostgreSQL table, but you can use just some of these columns and in any order.
- Column types may differ from those in the original PostgreSQL table. ClickHouse tries to [cast](../../../engines/database-engines/postgresql.md#data_types-support) values to the ClickHouse data types.
- The [external_table_functions_use_nulls](../../../operations/settings/settings.md#external-table-functions-use-nulls) setting defines how to handle Nullable columns. Default value: 1. If 0, the table function does not make Nullable columns and inserts default values instead of nulls. This is also applicable for NULL values inside arrays.

**Engine Parameters**

@ -49,6 +49,12 @@ PostgreSQL `Array` types are converted into ClickHouse arrays.

!!! info "Note"
    Be careful: in PostgreSQL, array data created like `type_name[]` may contain multi-dimensional arrays of different dimensions in different table rows of the same column. In ClickHouse, multidimensional arrays in the same column must have the same number of dimensions in all table rows.

Supports multiple replicas, which must be listed by `|`. For example:

```sql
CREATE TABLE test_replicas (id UInt32, name String) ENGINE = PostgreSQL('postgres{2|3|4}:5432', 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword');
```

Replica priority for the PostgreSQL dictionary source is supported. The bigger the number in the map, the lower the priority. The highest priority is `0`.
@ -65,7 +65,7 @@ By checking the row count:

Query:

``` sql
SELECT count() FROM recipes;
```
@ -94,11 +94,11 @@ For production environments, it’s recommended to use the latest `stable`-versi

To run ClickHouse inside Docker, follow the guide on [Docker Hub](https://hub.docker.com/r/yandex/clickhouse-server/). Those images use official `deb` packages inside.

### Single Binary {#from-single-binary}

You can install ClickHouse on Linux using a single portable binary from the latest commit of the `master` branch: [https://builds.clickhouse.tech/master/amd64/clickhouse].

``` bash
curl -O 'https://builds.clickhouse.tech/master/amd64/clickhouse' && chmod a+x clickhouse
sudo ./clickhouse install
```

@ -107,9 +107,10 @@ sudo ./clickhouse install

For non-Linux operating systems and for the AArch64 CPU architecture, ClickHouse builds are provided as a cross-compiled binary from the latest commit of the `master` branch (with a delay of a few hours).

- [MacOS x86_64](https://builds.clickhouse.tech/master/macos/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/macos/clickhouse' && chmod a+x ./clickhouse`
- [MacOS Aarch64 (Apple Silicon)](https://builds.clickhouse.tech/master/macos-aarch64/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/macos-aarch64/clickhouse' && chmod a+x ./clickhouse`
- [FreeBSD x86_64](https://builds.clickhouse.tech/master/freebsd/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/freebsd/clickhouse' && chmod a+x ./clickhouse`
- [Linux AArch64](https://builds.clickhouse.tech/master/aarch64/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/aarch64/clickhouse' && chmod a+x ./clickhouse`

After downloading, you can use the `clickhouse client` to connect to the server, or `clickhouse local` to process local data.
BIN
docs/en/images/play.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 26 KiB

@ -1302,6 +1302,7 @@ The table below shows supported data types and how they match ClickHouse [data t

| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `UTF8` |
| `STRING`, `BINARY` | [FixedString](../sql-reference/data-types/fixedstring.md) | `UTF8` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `DECIMAL256` | [Decimal256](../sql-reference/data-types/decimal.md) | `DECIMAL256` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |

Arrays can be nested and can have a value of the `Nullable` type as an argument.
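
Assuming this table describes the Arrow format mapping (the `UTF8`/`LIST`/`DECIMAL` column suggests it), the new `DECIMAL256` row can be exercised with the `file` table function; a sketch with an illustrative file name and structure:

``` sql
-- Hypothetical sketch: read an Arrow file that carries a 256-bit decimal column.
-- Older releases may additionally require enabling big-integer/decimal types.
SELECT toTypeName(d), d
FROM file('data.arrow', 'Arrow', 'd Decimal256(4)')
LIMIT 3;
```
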
@ -7,16 +7,21 @@ toc_title: HTTP Interface

The HTTP interface lets you use ClickHouse on any platform from any programming language. We use it for working from Java and Perl, as well as shell scripts. In other departments, the HTTP interface is used from Perl, Python, and Go. The HTTP interface is more limited than the native interface, but it has better compatibility.

By default, `clickhouse-server` listens for HTTP on port 8123 (this can be changed in the config).

If you make a `GET /` request without parameters, it returns the 200 response code and the string defined by [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response) (default value: “Ok.”, with a line feed at the end).

``` bash
$ curl 'http://localhost:8123/'
Ok.
```

The Web UI can be accessed here: `http://localhost:8123/play`.

![Web UI](../images/play.png)

In health-check scripts, use the `GET /ping` request. This handler always returns “Ok.” (with a line feed at the end). Available from version 18.12.13.

``` bash
$ curl 'http://localhost:8123/ping'

@ -51,8 +56,8 @@ X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","writ

1
```

As you can see, `curl` is somewhat inconvenient in that spaces must be URL escaped.
Although `wget` escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked.

``` bash
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-

@ -75,7 +80,7 @@ ECT 1

, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
```

By default, data is returned in [TabSeparated](formats.md#tabseparated) format.

You use the FORMAT clause of the query to request any other format.

@ -90,9 +95,11 @@ $ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @

└───┘
```

The POST method of transmitting data is necessary for `INSERT` queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the `INSERT` query replaces `LOAD DATA LOCAL INFILE` from MySQL.

**Examples**

Creating a table:

``` bash
$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-

@ -498,7 +505,7 @@ Return a message.

        <response_content>Say Hi!</response_content>
    </handler>
</rule>
</http_handlers>
```

``` bash

@ -632,6 +639,4 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'

<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact
```

[Original article](https://clickhouse.tech/docs/en/interfaces/http_interface/) <!--hide-->
@ -59,6 +59,7 @@ toc_title: Adopters

| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
| <a href="https://infobaleen.com" class="favicon">Infobaleen</a> | AI marketing tool | Analytics | — | — | [Official site](https://infobaleen.com) |
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
| <a href="https://instabug.com/" class="favicon">Instabug</a> | APM Platform | Main product | — | — | [A quote from Co-Founder](https://altinity.com/) |

@ -110,7 +111,7 @@ toc_title: Adopters

| <a href="https://www.semrush.com/" class="favicon">SEMrush</a> | Marketing | Main product | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/5_semrush.pdf) |
| <a href="https://sentry.io/" class="favicon">Sentry</a> | Software Development | Main product | — | — | [Blog Post in English, May 2019](https://blog.sentry.io/2019/05/16/introducing-snuba-sentrys-new-search-infrastructure) |
| <a href="https://seo.do/" class="favicon">seo.do</a> | Analytics | Main product | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/CH%20Presentation-%20Metehan%20Çetinkaya.pdf) |
| <a href="http://www.sgk.gov.tr/wps/portal/sgk/tr" class="favicon">SGK</a> | Government Social Security | Analytics | — | — | [Slides in English, November 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup35/ClickHouse%20Meetup-Ramazan%20POLAT.pdf) |
| <a href="http://english.sina.com/index.html" class="favicon">Sina</a> | News | — | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/6.%20ClickHouse最佳实践%20高鹏_新浪.pdf) |
| <a href="https://smi2.ru/" class="favicon">SMI2</a> | News | Analytics | — | — | [Blog Post in Russian, November 2017](https://habr.com/ru/company/smi2/blog/314558/) |
| <a href="https://www.spark.co.nz/" class="favicon">Spark New Zealand</a> | Telecommunications | Security Operations | — | — | [Blog Post, Feb 2020](https://blog.n0p.me/2020/02/2020-02-05-dnsmonster/) |

@ -154,5 +155,7 @@ toc_title: Adopters

| <a href="https://www.hydrolix.io/" class="favicon">Hydrolix</a> | Cloud data platform | Main product | — | — | [Documentation](https://docs.hydrolix.io/guide/query) |
| <a href="https://www.argedor.com/en/clickhouse/" class="favicon">Argedor</a> | ClickHouse support | — | — | — | [Official website](https://www.argedor.com/en/clickhouse/) |
| <a href="https://signoz.io/" class="favicon">SigNoz</a> | Observability Platform | Main Product | — | — | [Source code](https://github.com/SigNoz/signoz) |
| <a href="https://chelpipegroup.com/" class="favicon">ChelPipe Group</a> | Analytics | — | — | — | [Blog post, June 2021](https://vc.ru/trade/253172-tyazhelomu-proizvodstvu-user-friendly-sayt-internet-magazin-trub-dlya-chtpz) |
| <a href="https://zagravagames.com/en/" class="favicon">Zagrava Trading</a> | — | — | — | — | [Job offer, May 2021](https://twitter.com/datastackjobs/status/1394707267082063874) |

[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) <!--hide-->
114
docs/en/operations/clickhouse-keeper.md
Normal file
@ -0,0 +1,114 @@

---
toc_priority: 66
toc_title: ClickHouse Keeper
---

# [pre-production] clickhouse-keeper

The ClickHouse server uses the [ZooKeeper](https://zookeeper.apache.org/) coordination system for data [replication](../engines/table-engines/mergetree-family/replication.md) and [distributed DDL](../sql-reference/distributed-ddl.md) query execution. ClickHouse Keeper is an alternative coordination system compatible with ZooKeeper.

!!! warning "Warning"
    This feature is currently in the pre-production stage. We test it in our CI and on small internal installations.

## Implementation details

ZooKeeper is one of the first well-known open-source coordination systems. It is implemented in Java and has quite a simple and powerful data model. ZooKeeper's coordination algorithm, ZAB (ZooKeeper Atomic Broadcast), doesn't provide linearizability guarantees for reads, because each ZooKeeper node serves reads locally. Unlike ZooKeeper, `clickhouse-keeper` is written in C++ and uses the [RAFT algorithm](https://raft.github.io/) [implementation](https://github.com/eBay/NuRaft). This algorithm provides linearizability for both reads and writes and has several open-source implementations in different languages.

By default, `clickhouse-keeper` provides the same guarantees as ZooKeeper (linearizable writes, non-linearizable reads). It has a compatible client-server protocol, so any standard ZooKeeper client can be used to interact with `clickhouse-keeper`. Snapshots and logs have a format incompatible with ZooKeeper, but the `clickhouse-keeper-converter` tool allows converting ZooKeeper data to a `clickhouse-keeper` snapshot. The interserver protocol in `clickhouse-keeper` is also incompatible with ZooKeeper, so a mixed ZooKeeper/clickhouse-keeper cluster is impossible.
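
Since the client protocol is ZooKeeper-compatible, a ClickHouse server whose `<zookeeper>` section points at a Keeper quorum can inspect it through the usual `system.zookeeper` table; a minimal sketch, assuming such a configured server:

``` sql
-- List the root znodes served by clickhouse-keeper (or ZooKeeper).
SELECT name, czxid, mzxid
FROM system.zookeeper
WHERE path = '/';
```
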
## Configuration

`clickhouse-keeper` can be used as a standalone replacement for ZooKeeper or as an internal part of `clickhouse-server`; in both cases the configuration is almost the same `.xml` file. The main `clickhouse-keeper` configuration tag is `<keeper_server>`. Keeper configuration has the following parameters:

- `tcp_port` — Port for a client to connect (default for ZooKeeper is `2181`).
- `tcp_port_secure` — Secure port for a client to connect.
- `server_id` — Unique server id; each participant of the clickhouse-keeper cluster must have a unique number (1, 2, 3, and so on).
- `log_storage_path` — Path to coordination logs; as with ZooKeeper, it is best to store logs on a non-busy device.
- `snapshot_storage_path` — Path to coordination snapshots.

Other common parameters are inherited from the clickhouse-server config (`listen_host`, `logger`, and so on).

Internal coordination settings are located in the `<keeper_server>.<coordination_settings>` section:

- `operation_timeout_ms` — Timeout for a single client operation.
- `session_timeout_ms` — Timeout for a client session.
- `dead_session_check_period_ms` — How often clickhouse-keeper checks for dead sessions and removes them.
- `heart_beat_interval_ms` — How often a clickhouse-keeper leader will send heartbeats to followers.
- `election_timeout_lower_bound_ms` — If a follower didn't receive heartbeats from the leader in this interval, it can initiate leader election.
- `election_timeout_upper_bound_ms` — If a follower didn't receive heartbeats from the leader in this interval, it must initiate leader election.
- `rotate_log_storage_interval` — How many log records to store in a single file.
- `reserved_log_items` — How many coordination log records to store before compaction.
- `snapshot_distance` — How often clickhouse-keeper will create new snapshots (in the number of log records).
- `snapshots_to_keep` — How many snapshots to keep.
- `stale_log_gap` — The threshold at which the leader considers a follower stale and sends a snapshot to it instead of logs.
- `force_sync` — Call `fsync` on each write to the coordination log.
- `raft_logs_level` — Text logging level for coordination (trace, debug, and so on).
- `shutdown_timeout` — How long to wait for internal connections to finish during shutdown.
- `startup_timeout` — If the server doesn't connect to the other quorum participants within the specified timeout, it will terminate.

Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains the server descriptions. The only parameter for the whole quorum is `secure`, which enables encrypted connections for communication between quorum participants. The main parameters for each `<server>` are:

- `id` — Server id in the quorum.
- `hostname` — Hostname where this server is placed.
- `port` — Port where this server listens for connections.

Examples of configuration for a quorum with three nodes can be found in [integration tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/integration) with the `test_keeper_` prefix. Example configuration for server #1:

```xml
<keeper_server>
    <tcp_port>2181</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>zoo1</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>zoo2</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>3</id>
            <hostname>zoo3</hostname>
            <port>9444</port>
        </server>
    </raft_configuration>
</keeper_server>
```

## How to run

`clickhouse-keeper` is bundled into the `clickhouse-server` package; just add the configuration of `<keeper_server>` and start clickhouse-server as always. If you want to run a standalone `clickhouse-keeper`, you can start it in a similar way with:

```bash
clickhouse-keeper --config /etc/your_path_to_config/config.xml --daemon
```

## [experimental] Migration from ZooKeeper

Seamless migration from ZooKeeper to `clickhouse-keeper` is impossible: you have to stop your ZooKeeper cluster, convert the data, and start `clickhouse-keeper`. The `clickhouse-keeper-converter` tool allows converting ZooKeeper logs and snapshots to a `clickhouse-keeper` snapshot. It works only with ZooKeeper > 3.4. Steps for migration:

1. Stop all ZooKeeper nodes.

2. [optional, but recommended] Find the ZooKeeper leader node, then start and stop it again. This forces ZooKeeper to create a consistent snapshot.

3. Run `clickhouse-keeper-converter` on the leader, for example:

```bash
clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 --output-dir /path/to/clickhouse/keeper/snapshots
```

4. Copy the snapshot to the `clickhouse-server` nodes with a configured `keeper`, or start `clickhouse-keeper` instead of ZooKeeper. The snapshot must persist only on the leader node; the leader will sync it automatically to the other nodes.
@ -22,6 +22,23 @@ Some settings specified in the main configuration file can be overridden in othe

The config can also define “substitutions”. If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md)).

If you want to replace an entire element with a substitution, use `include` as the element name.

XML substitution example:

```xml
<yandex>
    <!-- Appends XML subtree found at `/profiles-in-zookeeper` ZK path to `<profiles>` element. -->
    <profiles from_zk="/profiles-in-zookeeper" />

    <users>
        <!-- Replaces `include` element with the subtree found at `/users-in-zookeeper` ZK path. -->
        <include from_zk="/users-in-zookeeper" />
        <include from_zk="/other-users-in-zookeeper" />
    </users>
</yandex>
```

Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node, and it will be fully inserted into the source element.

## User Settings {#user-settings}

@ -32,6 +49,8 @@ Users configuration can be splitted into separate files similar to `config.xml`

The directory name is defined as the `users_config` setting without the `.xml` postfix, concatenated with `.d`.
The directory `users.d` is used by default, as `users_config` defaults to `users.xml`.

Note that configuration files are first merged taking into account [Override](#override) settings, and includes are processed after that.

## XML example {#example}

For example, you can have a separate config file for each user like this:
@ -379,7 +379,7 @@ Default value: `1`.

## insert_null_as_default {#insert_null_as_default}

Enables or disables the insertion of [default values](../../sql-reference/statements/create/table.md#create-default-values) instead of [NULL](../../sql-reference/syntax.md#null-literal) into columns with a non-[nullable](../../sql-reference/data-types/nullable.md#data_type-nullable) data type.
If the column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If the column type is nullable, then `NULL` values are inserted as is, regardless of this setting.

This setting is applicable to [INSERT ... SELECT](../../sql-reference/statements/insert-into.md#insert_query_insert-select) queries. Note that `SELECT` subqueries may be concatenated with the `UNION ALL` clause.
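
A minimal sketch of the behavior (the table and values are placeholders, not from the original text):

``` sql
CREATE TABLE t (x UInt32 DEFAULT 42) ENGINE = Memory;

SET insert_null_as_default = 1;

-- The NULL produced by the SELECT is replaced with the column default (42).
INSERT INTO t SELECT NULL;

SELECT x FROM t; -- returns 42
```
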
@ -1182,7 +1182,7 @@ Possible values:

Default value: `1`.

**Additional Info**

This setting is useful for replicated tables with a sampling key. A query may be processed faster if it is executed on several servers in parallel. But the query performance may degrade in the following cases:

@ -1194,21 +1194,22 @@ This setting is useful for replicated tables with a sampling key. A query may be

!!! warning "Warning"
    This setting will produce incorrect results when joins or subqueries are involved, and all tables don't meet certain requirements. See [Distributed Subqueries and max_parallel_replicas](../../sql-reference/operators/in.md#max_parallel_replica-subqueries) for more details.
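
Judging by the warning above, this section describes `max_parallel_replicas`; a hedged sketch of its intended use on a replicated table with a sampling key (the table name is a placeholder):

``` sql
-- Each of up to two replicas processes a different sample of the data.
SELECT count()
FROM replicated_table
SETTINGS max_parallel_replicas = 2;
```
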
## compile_expressions {#compile-expressions}

Enables or disables compilation of frequently used simple functions and operators to native code with LLVM at runtime.

Possible values:

- 0 — Disabled.
- 1 — Enabled.

Default value: `1`.

## min_count_to_compile_expression {#min-count-to-compile-expression}

Minimum count of executions of the same expression before it is compiled.

Default value: `3`.
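
A hedged sketch of toggling the JIT per query (the workload is illustrative; both settings are documented above):

``` sql
-- Force expression compilation for this query only, compiling immediately.
SELECT count()
FROM numbers(10000000)
WHERE (number * 3 + 1) % 7 = 0
SETTINGS compile_expressions = 1, min_count_to_compile_expression = 0;
```
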
## output_format_json_quote_64bit_integers {#session_settings-output_format_json_quote_64bit_integers}

@ -1558,7 +1559,7 @@ Possible values:

- 0 — Disabled (final query processing is done on the initiator node).
- 1 — Do not merge aggregation states from different servers for distributed query processing (the query is completely processed on the shard, and the initiator only proxies the data); can be used when it is certain that there are different keys on different shards.
- 2 — Same as `1`, but applies `ORDER BY` and `LIMIT` on the initiator (this is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1`); can be used for queries with `ORDER BY` and/or `LIMIT`.

**Example**

@ -1622,7 +1623,7 @@ Possible values:

Default value: 0

## optimize_skip_unused_shards_rewrite_in {#optimize-skip-unused-shards-rewrite-in}

Rewrites `IN` in the query sent to remote shards to exclude values that do not belong to the shard (requires `optimize_skip_unused_shards`).
Default value: 0.

## optimize_functions_to_subcolumns {#optimize-functions-to-subcolumns}

Enables or disables an optimization that transforms some functions into reading subcolumns. This reduces the amount of data to read.

These functions can be transformed:

- [length](../../sql-reference/functions/array-functions.md#array_functions-length) to read the [size0](../../sql-reference/data-types/array.md#array-size) subcolumn.
- [empty](../../sql-reference/functions/array-functions.md#function-empty) to read the [size0](../../sql-reference/data-types/array.md#array-size) subcolumn.
- [notEmpty](../../sql-reference/functions/array-functions.md#function-notempty) to read the [size0](../../sql-reference/data-types/array.md#array-size) subcolumn.
- [isNull](../../sql-reference/operators/index.md#operator-is-null) to read the [null](../../sql-reference/data-types/nullable.md#finding-null) subcolumn.
- [isNotNull](../../sql-reference/operators/index.md#is-not-null) to read the [null](../../sql-reference/data-types/nullable.md#finding-null) subcolumn.
- [count](../../sql-reference/aggregate-functions/reference/count.md) to read the [null](../../sql-reference/data-types/nullable.md#finding-null) subcolumn.
- [mapKeys](../../sql-reference/functions/tuple-map-functions.md#mapkeys) to read the [keys](../../sql-reference/data-types/map.md#map-subcolumns) subcolumn.
- [mapValues](../../sql-reference/functions/tuple-map-functions.md#mapvalues) to read the [values](../../sql-reference/data-types/map.md#map-subcolumns) subcolumn.

Possible values:

- 0 — Optimization disabled.
- 1 — Optimization enabled.

Default value: `0`.
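As an illustration, a minimal sketch assuming a hypothetical table `t` with an `Array` column `arr`:

```sql
SET optimize_functions_to_subcolumns = 1;

-- With the optimization enabled, this reads only the compact `arr.size0`
-- subcolumn instead of the full array data.
SELECT count() FROM t WHERE notEmpty(arr);
```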
## distributed_replica_error_half_life {#settings-distributed_replica_error_half_life}

- Type: seconds
@ -1802,6 +1825,27 @@ Possible values:
Default value: 0.

## distributed_directory_monitor_split_batch_on_failure {#distributed_directory_monitor_split_batch_on_failure}

Enables/disables splitting batches on failures.

Sometimes sending a particular batch to the remote shard may fail because a complex downstream pipeline (e.g. a `MATERIALIZED VIEW` with `GROUP BY`) hits `Memory limit exceeded` or similar errors. In this case, retrying will not help (and it will block distributed sends for the table), but sending the files from that batch one by one may allow the `INSERT` to succeed.

So setting this to `1` disables batching for such batches (i.e. it temporarily disables `distributed_directory_monitor_batch_inserts` for failed batches).

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: 0.

!!! note "Note"
    This setting also affects broken batches (which may appear because of abnormal server (machine) termination when `fsync_after_insert`/`fsync_directories` are not enabled for the [Distributed](../../engines/table-engines/special/distributed.md) table engine).

!!! warning "Warning"
    You should not rely on automatic batch splitting, since this may hurt performance.
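A minimal sketch of enabling the setting for the current session; subsequent flushes of pending distributed sends will split a failed batch into per-file inserts:

```sql
-- Enable splitting of failed batches for distributed sends.
SET distributed_directory_monitor_split_batch_on_failure = 1;
```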
## os_thread_priority {#setting-os-thread-priority}

Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. The OS scheduler considers this priority when choosing the next thread to run on each available CPU core.
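For example, a minimal sketch that lowers the priority of the current session's query threads (nice semantics: a higher value means lower priority; whether the change takes effect may depend on the server's OS capabilities):

```sql
-- Queries in this session now compete less aggressively for CPU.
SET os_thread_priority = 10;
```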
@ -2085,7 +2129,7 @@ Default value: 128.
## background_fetches_pool_size {#background_fetches_pool_size}

Sets the number of threads performing background fetches for [replicated](../../engines/table-engines/mergetree-family/replication.md) tables. This setting is applied at ClickHouse server start and can't be changed in a user session. For production usage with frequent small insertions or a slow ZooKeeper cluster, it is recommended to use the default value.
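Since the value cannot be changed in a session, it can only be inspected at runtime; a minimal sketch:

```sql
-- Check the value the server was started with.
SELECT name, value FROM system.settings WHERE name = 'background_fetches_pool_size';
```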
Possible values:
@ -2672,7 +2716,7 @@ Default value: `0`.
## aggregate_functions_null_for_empty {#aggregate_functions_null_for_empty}

Enables or disables rewriting all aggregate functions in a query, adding the [-OrNull](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-ornull) suffix to them. Enable it for SQL standard compatibility.
It is implemented via query rewrite (similar to the [count_distinct_implementation](#settings-count_distinct_implementation) setting) to get consistent results for distributed queries.
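A minimal sketch: with the setting enabled, `sum` is rewritten to `sumOrNull`, so an empty result set yields `NULL` instead of `0`:

```sql
SET aggregate_functions_null_for_empty = 1;

-- The WHERE clause matches no rows; the rewritten sumOrNull(number)
-- returns NULL rather than 0.
SELECT sum(number) FROM numbers(10) WHERE number > 100;
```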
Possible values:
@ -2856,7 +2900,7 @@ Default value: `0`.
## database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously}

Adds a modifier `SYNC` to all `DROP` and `DETACH` queries.
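A minimal sketch, assuming a hypothetical table `t` in an `Atomic` database:

```sql
SET database_atomic_wait_for_drop_and_detach_synchronously = 1;

-- Now behaves like `DROP TABLE t SYNC`: the query returns only after
-- the table's data has actually been removed.
DROP TABLE t;
```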
Possible values:
@ -2962,7 +3006,7 @@ Enables or disables using the original column names instead of aliases in query
Possible values:

- 0 — The column name is substituted with the alias.
- 1 — The column name is not substituted with the alias.

Default value: `0`.
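For illustration (the section heading is elided in this hunk; the alias-substitution behavior described matches the `prefer_column_name_to_alias` setting), a minimal sketch:

```sql
SET prefer_column_name_to_alias = 1;

-- `number` inside max() now refers to the source column rather than
-- to the `avg(number)` alias, so the query runs without an error.
SELECT avg(number) AS number, max(number) FROM numbers(10);
```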
@ -3075,7 +3119,7 @@ SELECT
sum(a),
sumCount(b).1,
sumCount(b).2,
(sumCount(b).1) / (sumCount(b).2)
FROM fuse_tbl
```
@ -3144,4 +3188,17 @@ SETTINGS index_granularity = 8192 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

## external_table_functions_use_nulls {#external-table-functions-use-nulls}

Defines how the [mysql](../../sql-reference/table-functions/mysql.md), [postgresql](../../sql-reference/table-functions/postgresql.md) and [odbc](../../sql-reference/table-functions/odbc.md) table functions use Nullable columns.

Possible values:

- 0 — The table function explicitly uses Nullable columns.
- 1 — The table function implicitly uses Nullable columns.

Default value: `1`.

**Usage**

If the setting is set to `0`, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable for NULL values inside arrays.
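A minimal sketch, assuming placeholder MySQL connection parameters:

```sql
SET external_table_functions_use_nulls = 0;

-- Nullable columns from MySQL are now read as non-Nullable;
-- NULL values are replaced with the column type's default value.
SELECT * FROM mysql('localhost:3306', 'db', 'table', 'user', 'password');
```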
@ -36,4 +36,4 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10
- [system.asynchronous_metrics](../system-tables/asynchronous_metrics.md) — Contains metrics, calculated periodically in the background.
- [system.metric_log](../system-tables/metric_log.md) — Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metric_log) <!--hide-->
@ -33,6 +33,6 @@ SELECT * FROM system.asynchronous_metrics LIMIT 10
- [Monitoring](../../operations/monitoring.md) — Base concepts of ClickHouse monitoring.
- [system.metrics](../../operations/system-tables/metrics.md#system_tables-metrics) — Contains instantly calculated metrics.
- [system.events](../../operations/system-tables/events.md#system_tables-events) — Contains a number of events that have occurred.
- [system.metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics) <!--hide-->
@ -68,4 +68,4 @@ estimated_recovery_time: 0
- [distributed_replica_error_cap setting](../../operations/settings/settings.md#settings-distributed_replica_error_cap)
- [distributed_replica_error_half_life setting](../../operations/settings/settings.md#settings-distributed_replica_error_half_life)

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/clusters) <!--hide-->
@ -69,4 +69,21 @@ is_in_sampling_key: 0
compression_codec:
```

The `system.columns` table contains the following columns (the column type is shown in brackets):

- `database` (String) — Database name.
- `table` (String) — Table name.
- `name` (String) — Column name.
- `type` (String) — Column type.
- `default_kind` (String) — Expression type (`DEFAULT`, `MATERIALIZED`, `ALIAS`) for the default value, or an empty string if it is not defined.
- `default_expression` (String) — Expression for the default value, or an empty string if it is not defined.
- `data_compressed_bytes` (UInt64) — The size of compressed data, in bytes.
- `data_uncompressed_bytes` (UInt64) — The size of decompressed data, in bytes.
- `marks_bytes` (UInt64) — The size of marks, in bytes.
- `comment` (String) — Comment on the column, or an empty string if it is not defined.
- `is_in_partition_key` (UInt8) — Flag that indicates whether the column is in the partition expression.
- `is_in_sorting_key` (UInt8) — Flag that indicates whether the column is in the sorting key expression.
- `is_in_primary_key` (UInt8) — Flag that indicates whether the column is in the primary key expression.
- `is_in_sampling_key` (UInt8) — Flag that indicates whether the column is in the sampling key expression.
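For example, a quick way to inspect the metadata of a single table:

```sql
SELECT name, type, default_kind
FROM system.columns
WHERE database = 'system' AND table = 'numbers';
```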
[Original article](https://clickhouse.tech/docs/en/operations/system-tables/columns) <!--hide-->
@ -38,4 +38,4 @@ SELECT * FROM system.contributors WHERE name = 'Olga Khvostikova'
│ Olga Khvostikova │
└──────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/contributors) <!--hide-->
@ -8,4 +8,4 @@ Columns:
- `with_admin_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `current_role` is a role with `ADMIN OPTION` privilege.
- `is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `current_role` is a default role.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/current-roles) <!--hide-->
docs/en/operations/system-tables/data_skipping_indices.md (new file, 39 lines)
@ -0,0 +1,39 @@
# system.data_skipping_indices {#system-data-skipping-indices}

Contains information about existing data skipping indices in all the tables.

Columns:

- `database` ([String](../../sql-reference/data-types/string.md)) — Database name.
- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
- `name` ([String](../../sql-reference/data-types/string.md)) — Index name.
- `type` ([String](../../sql-reference/data-types/string.md)) — Index type.
- `expr` ([String](../../sql-reference/data-types/string.md)) — Expression used to calculate the index.
- `granularity` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of granules in the block.

**Example**

```sql
SELECT * FROM system.data_skipping_indices LIMIT 2 FORMAT Vertical;
```

```text
Row 1:
──────
database: default
table: user_actions
name: clicks_idx
type: minmax
expr: clicks
granularity: 1

Row 2:
──────
database: default
table: users
name: contacts_null_idx
type: minmax
expr: assumeNotNull(contacts_null)
granularity: 1
```
@ -33,4 +33,4 @@ SELECT * FROM system.data_type_families WHERE alias_to = 'String'
- [Syntax](../../sql-reference/syntax.md) — Information about supported syntax.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/data_type_families) <!--hide-->
@ -35,4 +35,4 @@ SELECT * FROM system.databases
└────────────────────────────────┴────────┴────────────────────────────┴─────────────────────────────────────────────────────────────────────┴──────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/databases) <!--hide-->
@ -8,4 +8,4 @@ For the description of other columns, see [system.parts](../../operations/system
If the part name is invalid, values of some columns may be `NULL`. Such parts can be deleted with [ALTER TABLE DROP DETACHED PART](../../sql-reference/statements/alter/partition.md#alter_drop-detached).

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/detached_parts) <!--hide-->
@ -61,4 +61,4 @@ SELECT * FROM system.dictionaries
└──────────┴──────┴────────┴─────────────┴──────┴────────┴──────────────────────────────────────┴─────────────────────┴─────────────────┴─────────────┴──────────┴───────────────┴───────────────────────┴────────────────────────────┴──────────────┴──────────────┴─────────────────────┴──────────────────────────────┴───────────────────────┴────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/dictionaries) <!--hide-->
@ -10,9 +10,6 @@ Columns:
- `total_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Disk volume in bytes.
- `keep_free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Amount of disk space that should stay free on disk in bytes. Defined in the `keep_free_space_bytes` parameter of disk configuration.

**Example**

```sql
@ -27,5 +24,4 @@ Columns:
1 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/disks) <!--hide-->
@ -9,4 +9,4 @@ Columns:
- `is_current` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `enabled_role` is a current role of a current user.
- `is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `enabled_role` is a default role.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/enabled-roles) <!--hide-->
@ -31,4 +31,4 @@ SELECT * FROM system.events LIMIT 5
- [system.metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.
- [Monitoring](../../operations/monitoring.md) — Base concepts of ClickHouse monitoring.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/events) <!--hide-->
@ -7,8 +7,6 @@ Columns:
- `name`(`String`) – The name of the function.
- `is_aggregate`(`UInt8`) — Whether the function is aggregate.

**Example**

```sql
@ -30,4 +28,6 @@ Columns:
└──────────────────────────┴──────────────┴──────────────────┴──────────┘

10 rows in set. Elapsed: 0.002 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/functions) <!--hide-->
@ -21,4 +21,4 @@ Columns:
- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Permission is granted `WITH GRANT OPTION`, see [GRANT](../../sql-reference/statements/grant.md#grant-privigele-syntax).

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/grants) <!--hide-->
@ -14,4 +14,4 @@ Columns:
- `Tables.database` (Array(String)) - Array of names of database tables that use the `config_name` parameter.
- `Tables.table` (Array(String)) - Array of table names that use the `config_name` parameter.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/graphite_retentions) <!--hide-->
@ -36,4 +36,4 @@ SELECT library_name, license_type, license_path FROM system.licenses LIMIT 15
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/licenses) <!--hide-->
@ -51,4 +51,4 @@ type: SettingUInt64
4 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/merge_tree_settings) <!--hide-->
@ -22,4 +22,4 @@ Columns:
- `merge_type` — The type of current merge. Empty if it's a mutation.
- `merge_algorithm` — The algorithm used in current merge. Empty if it's a mutation.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/merges) <!--hide-->
@ -48,4 +48,4 @@ CurrentMetric_DistributedFilesToInsert: 0
- [system.metrics](../../operations/system-tables/metrics.md) — Contains instantly calculated metrics.
- [Monitoring](../../operations/monitoring.md) — Base concepts of ClickHouse monitoring.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/metric_log) <!--hide-->
@ -38,4 +38,4 @@ SELECT * FROM system.metrics LIMIT 10
- [system.metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log) — Contains a history of metrics values from tables `system.metrics` and `system.events`.
- [Monitoring](../../operations/monitoring.md) — Base concepts of ClickHouse monitoring.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/metrics) <!--hide-->
@ -45,4 +45,4 @@ If there were problems with mutating some data parts, the following columns cont
- [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table engine
- [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) family

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/mutations) <!--hide-->
@ -29,4 +29,4 @@ Reads from this table are not parallelized.
10 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/numbers) <!--hide-->
@ -27,4 +27,4 @@ Used for tests.
10 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/numbers_mt) <!--hide-->
@ -20,4 +20,4 @@ This is similar to the `DUAL` table found in other DBMSs.
1 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/one) <!--hide-->
@ -66,4 +66,4 @@ error: 0
exception:
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/part_log) <!--hide-->
@ -155,4 +155,4 @@ move_ttl_info.max: []
- [MergeTree family](../../engines/table-engines/mergetree-family/mergetree.md)
- [TTL for Columns and Tables](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl)

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/parts) <!--hide-->
@ -14,7 +14,6 @@ Columns:
- `query` (String) – The query text. For `INSERT`, it does not include the data to insert.
- `query_id` (String) – Query ID, if defined.

```sql
:) SELECT * FROM system.processes LIMIT 10 FORMAT Vertical;
```
@ -34,14 +33,14 @@ initial_port: 47588
interface: 1
os_user: bharatnc
client_hostname: tower
client_name: ClickHouse
client_revision: 54437
client_version_major: 20
client_version_minor: 7
client_version_patch: 2
http_method: 0
http_user_agent:
quota_key:
elapsed: 0.000582537
is_cancelled: 0
read_rows: 0
@ -53,12 +52,10 @@ memory_usage: 0
peak_memory_usage: 0
query: SELECT * from system.processes LIMIT 10 FORMAT Vertical;
thread_ids: [67]
ProfileEvents: {'Query':1,'SelectQuery':1,'ReadCompressedBytes':36,'CompressedReadBufferBlocks':1,'CompressedReadBufferBytes':10,'IOBufferAllocs':1,'IOBufferAllocBytes':89,'ContextLock':15,'RWLockAcquiredReadLocks':1}
Settings: {'background_pool_size':'32','load_balancing':'random','allow_suspicious_low_cardinality_types':'1','distributed_aggregation_memory_efficient':'1','skip_unavailable_shards':'1','log_queries':'1','max_bytes_before_external_group_by':'20000000000','max_bytes_before_external_sort':'20000000000','allow_introspection_functions':'1'}

1 rows in set. Elapsed: 0.002 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/processes) <!--hide-->