Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-29 02:52:13 +00:00)

Commit a78c9e63e7: Merge branch 'master' of https://github.com/ClickHouse/ClickHouse into pg-ch-replica

.gitmodules (vendored): 4 lines changed
@@ -228,3 +228,7 @@
[submodule "contrib/datasketches-cpp"]
    path = contrib/datasketches-cpp
    url = https://github.com/ClickHouse-Extras/datasketches-cpp.git
+
+[submodule "contrib/yaml-cpp"]
+    path = contrib/yaml-cpp
+    url = https://github.com/ClickHouse-Extras/yaml-cpp.git
CHANGELOG.md: 139 lines changed
@@ -1,3 +1,142 @@
## ClickHouse release 21.5, 2021-05-20

#### Backward Incompatible Change

* Changed comparison of integers and floating point numbers when the integer is not exactly representable in the floating point data type. In the new version the comparison returns false, because a rounding error occurs. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not exactly representable as a floating point number (and `9223372036854775808.0` is rounded to `9223372036854776000.0`). In the previous version the comparison returned true, as if the numbers were equal, because when the floating point number `9223372036854776000.0` is converted back to UInt64, it yields `9223372036854775808`. For reference, the Python programming language also treats these numbers as equal. But this behaviour depended on the CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so the comparison was made more precise: an integer and a float are treated as equal only if the integer is exactly representable in the floating point type (a short SQL illustration follows this list). [#22595](https://github.com/ClickHouse/ClickHouse/pull/22595) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Removed support for `argMin` and `argMax` with a single `Tuple` argument. The code was not memory-safe. The feature was added by mistake and it is confusing for people. These functions can be reintroduced under different names later. This fixes [#22384](https://github.com/ClickHouse/ClickHouse/issues/22384) and reverts [#17359](https://github.com/ClickHouse/ClickHouse/issues/17359). [#23393](https://github.com/ClickHouse/ClickHouse/pull/23393) ([alexey-milovidov](https://github.com/alexey-milovidov)).
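A minimal SQL sketch (not part of the original changelog) of the comparison semantics described in the first item above; the expected results assume 21.5+ behaviour, where an integer and a float compare as equal only if the integer is exactly representable as that float:

```sql
-- 9223372036854775807 (2^63 - 1) cannot be represented exactly as Float64,
-- so under the new semantics this comparison is expected to return 0 (false).
SELECT 9223372036854775807 = toFloat64(9223372036854775807);

-- 4503599627370496 (2^52) is exactly representable as Float64,
-- so this comparison is expected to return 1 (true).
SELECT 4503599627370496 = toFloat64(4503599627370496);
```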
#### New Feature

* Added functions `dictGetChildren(dictionary, key)`, `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity. Also improved performance of the `dictGetHierarchy` and `dictIsIn` functions. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dictGetOrNull`. It works like `dictGet`, but returns `NULL` if the key was not found in the dictionary (see the SQL sketch after this list). Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
* Added a table function `s3Cluster`, which allows processing files from `s3` in parallel on every node of a specified cluster. [#22012](https://github.com/ClickHouse/ClickHouse/pull/22012) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added support for replicas and shards in MySQL/PostgreSQL table engine / table function. You can write `SELECT * FROM mysql('host{1,2}-{1|2}', ...)`. Closes [#20969](https://github.com/ClickHouse/ClickHouse/issues/20969). [#22217](https://github.com/ClickHouse/ClickHouse/pull/22217) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Added `ALTER TABLE ... FETCH PART ...` query. It's similar to `FETCH PARTITION`, but fetches only one part. [#22706](https://github.com/ClickHouse/ClickHouse/pull/22706) ([turbo jason](https://github.com/songenjie)).
* Added a setting `max_distributed_depth` that limits the depth of recursive queries to `Distributed` tables. Closes [#20229](https://github.com/ClickHouse/ClickHouse/issues/20229). [#21942](https://github.com/ClickHouse/ClickHouse/pull/21942) ([flynn](https://github.com/ucasFL)).
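A hedged SQL sketch of the new dictionary functions listed above; `regions_dict` and its `name` attribute are hypothetical names used only for illustration:

```sql
-- dictGetOrNull returns NULL when the key is absent from the dictionary.
SELECT dictGetOrNull('regions_dict', 'name', toUInt64(42)) AS name_or_null;

-- dictGetChildren returns the direct children of a key as an array of keys;
-- dictGetDescendants with level = 0 returns all descendants.
SELECT
    dictGetChildren('regions_dict', toUInt64(1)) AS children,
    dictGetDescendants('regions_dict', toUInt64(1), 0) AS all_descendants;
```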
#### Performance Improvement

* Improved performance of `intDiv` by dynamic dispatch for AVX2. This closes [#22314](https://github.com/ClickHouse/ClickHouse/issues/22314). [#23000](https://github.com/ClickHouse/ClickHouse/pull/23000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improved performance of reading from the `ArrowStream` input format for sources other than a local file (e.g. URL). [#22673](https://github.com/ClickHouse/ClickHouse/pull/22673) ([nvartolomei](https://github.com/nvartolomei)).
* Disabled compression by default when interacting with localhost (with clickhouse-client or server to server with distributed queries) via native protocol. It may improve performance of some import/export operations. This closes [#22234](https://github.com/ClickHouse/ClickHouse/issues/22234). [#22237](https://github.com/ClickHouse/ClickHouse/pull/22237) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Exclude values that do not belong to the shard from the right part of the IN section for distributed queries (under `optimize_skip_unused_shards_rewrite_in`, enabled by default, since it still requires `optimize_skip_unused_shards`); see the SQL sketch after this list. [#21511](https://github.com/ClickHouse/ClickHouse/pull/21511) ([Azat Khuzhin](https://github.com/azat)).
* Improved performance of reading a subset of columns with File-like table engines and column-oriented formats like Parquet, Arrow or ORC. This closes [#20129](https://github.com/ClickHouse/ClickHouse/issues/20129). [#21302](https://github.com/ClickHouse/ClickHouse/pull/21302) ([keenwolf](https://github.com/keen-wolf)).
* Allow moving more conditions to `PREWHERE`, as it was before version 21.1 (adjustment of internal heuristics). An insufficient number of moved conditions could lead to worse performance. [#23397](https://github.com/ClickHouse/ClickHouse/pull/23397) ([Anton Popov](https://github.com/CurtizJ)).
* Improved performance of ODBC connections and fixed all the outstanding issues from the backlog, using the `nanodbc` library instead of `Poco::ODBC`. Closes [#9678](https://github.com/ClickHouse/ClickHouse/issues/9678). Added support for DateTime64 and Decimal* for the ODBC table engine. Closes [#21961](https://github.com/ClickHouse/ClickHouse/issues/21961). Fixed an issue with Cyrillic text being truncated. Closes [#16246](https://github.com/ClickHouse/ClickHouse/issues/16246). Added connection pools for the ODBC bridge. [#21972](https://github.com/ClickHouse/ClickHouse/pull/21972) ([Kseniia Sumarokova](https://github.com/kssenii)).
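A hedged sketch of the `optimize_skip_unused_shards_rewrite_in` behaviour mentioned above; `dist_table` is a hypothetical Distributed table sharded by `user_id`, not part of the changelog:

```sql
-- With both settings enabled, shards whose sharding key cannot match any value
-- in the IN list are skipped, and the IN list sent to each remaining shard is
-- rewritten to contain only the values that belong to that shard.
SET optimize_skip_unused_shards = 1;
SET optimize_skip_unused_shards_rewrite_in = 1;

SELECT count()
FROM dist_table
WHERE user_id IN (101, 202, 303);
```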
#### Improvement

* Increased `max_uri_size` (the maximum size of URL in HTTP interface) to 1 MiB by default. This closes [#21197](https://github.com/ClickHouse/ClickHouse/issues/21197). [#22997](https://github.com/ClickHouse/ClickHouse/pull/22997) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Set `background_fetches_pool_size` to `8`, which is better for production usage with frequent small insertions or a slow ZooKeeper cluster. [#22945](https://github.com/ClickHouse/ClickHouse/pull/22945) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added `initial_array_size` and `max_array_size` options for `FlatDictionary`. [#22521](https://github.com/ClickHouse/ClickHouse/pull/22521) ([Maksim Kita](https://github.com/kitaisreal)).
* Added a new setting `non_replicated_deduplication_window` for insert deduplication in non-replicated MergeTree tables. [#22514](https://github.com/ClickHouse/ClickHouse/pull/22514) ([alesapin](https://github.com/alesapin)).
* Update paths to the `CatBoost` model configs in config reloading. [#22434](https://github.com/ClickHouse/ClickHouse/pull/22434) ([Kruglov Pavel](https://github.com/Avogar)).
* Added `Decimal256` type support in dictionaries. `Decimal256` is an experimental feature. Closes [#20979](https://github.com/ClickHouse/ClickHouse/issues/20979). [#22960](https://github.com/ClickHouse/ClickHouse/pull/22960) ([Maksim Kita](https://github.com/kitaisreal)).
* Enabled `async_socket_for_remote` by default (using fewer OS threads for distributed queries). [#23683](https://github.com/ClickHouse/ClickHouse/pull/23683) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also fixed a bug with over-compression of centroids in the implementation of an earlier version of the algorithm. [#23314](https://github.com/ClickHouse/ClickHouse/pull/23314) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Made the function name `unhex` case-insensitive for compatibility with MySQL. [#23229](https://github.com/ClickHouse/ClickHouse/pull/23229) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Implemented functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for the generic case when types of array elements are different. In previous versions the functions `arrayHasAny`, `arrayHasAll` returned false and `has`, `indexOf`, `countEqual` threw an exception. Also added support for `Decimal` and big integer types in the `has` function and similar ones. This closes [#20272](https://github.com/ClickHouse/ClickHouse/issues/20272). [#23044](https://github.com/ClickHouse/ClickHouse/pull/23044) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Raised the threshold on the max number of matches in the result of the function `extractAllGroupsHorizontal`. [#23036](https://github.com/ClickHouse/ClickHouse/pull/23036) ([Vasily Nemkov](https://github.com/Enmk)).
* Do not perform `optimize_skip_unused_shards` for clusters with one node. [#22999](https://github.com/ClickHouse/ClickHouse/pull/22999) ([Azat Khuzhin](https://github.com/azat)).
* Added the ability to run clickhouse-keeper (an experimental drop-in replacement for ZooKeeper) with SSL. The config setting `keeper_server.tcp_port_secure` can be used for secure interaction between client and keeper-server. `keeper_server.raft_configuration.secure` can be used to enable internal secure communication between nodes. [#22992](https://github.com/ClickHouse/ClickHouse/pull/22992) ([alesapin](https://github.com/alesapin)).
* Added the ability to flush the buffer only in background for `Buffer` tables. [#22986](https://github.com/ClickHouse/ClickHouse/pull/22986) ([Azat Khuzhin](https://github.com/azat)).
* When selecting from a MergeTree table with NULL in the WHERE condition, in rare cases an exception was thrown. This closes [#20019](https://github.com/ClickHouse/ClickHouse/issues/20019). [#22978](https://github.com/ClickHouse/ClickHouse/pull/22978) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed error handling in the Poco HTTP client for AWS. [#22973](https://github.com/ClickHouse/ClickHouse/pull/22973) ([kreuzerkrieg](https://github.com/kreuzerkrieg)).
* Respect `max_part_removal_threads` for `ReplicatedMergeTree`. [#22971](https://github.com/ClickHouse/ClickHouse/pull/22971) ([Azat Khuzhin](https://github.com/azat)).
* Fixed an obscure corner case of MergeTree settings `inactive_parts_to_throw_insert = 0` with `inactive_parts_to_delay_insert > 0`. [#22947](https://github.com/ClickHouse/ClickHouse/pull/22947) ([Azat Khuzhin](https://github.com/azat)).
* `dateDiff` now works with `DateTime64` arguments, even for values outside of the `DateTime` range (see the SQL sketch after this list). [#22931](https://github.com/ClickHouse/ClickHouse/pull/22931) ([Vasily Nemkov](https://github.com/Enmk)).
* MaterializeMySQL (experimental feature): added the ability to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian](https://github.com/cfroystad)).
* Allowed RBAC row policies via the PostgreSQL protocol. Closes [#22658](https://github.com/ClickHouse/ClickHouse/issues/22658). The PostgreSQL protocol is enabled in the configuration by default. [#22755](https://github.com/ClickHouse/ClickHouse/pull/22755) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Added a metric to track how much time is spent waiting for the Buffer layer lock. [#22725](https://github.com/ClickHouse/ClickHouse/pull/22725) ([Azat Khuzhin](https://github.com/azat)).
* Allowed using CTEs in VIEW definitions. This closes [#22491](https://github.com/ClickHouse/ClickHouse/issues/22491). [#22657](https://github.com/ClickHouse/ClickHouse/pull/22657) ([Amos Bird](https://github.com/amosbird)).
* Clear the rest of the screen and show the cursor in `clickhouse-client` if the previous program has left garbage in the terminal. This closes [#16518](https://github.com/ClickHouse/ClickHouse/issues/16518). [#22634](https://github.com/ClickHouse/ClickHouse/pull/22634) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Made the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (banker's rounding) is used. [#22582](https://github.com/ClickHouse/ClickHouse/pull/22582) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correctly check the structure of data blocks that are sent by Distributed tables. [#22325](https://github.com/ClickHouse/ClickHouse/pull/22325) ([Azat Khuzhin](https://github.com/azat)).
* Allow publishing Kafka errors to a virtual column of the Kafka engine, controlled by the `kafka_handle_error_mode` setting. [#21850](https://github.com/ClickHouse/ClickHouse/pull/21850) ([fastio](https://github.com/fastio)).
* Added aliases `simpleJSONExtract/simpleJSONHas` for `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes [#21383](https://github.com/ClickHouse/ClickHouse/issues/21383). [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
* Added `clickhouse-library-bridge` for the library dictionary source. Closes [#9502](https://github.com/ClickHouse/ClickHouse/issues/9502). [#21509](https://github.com/ClickHouse/ClickHouse/pull/21509) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Forbid dropping a column if it's referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)).
* Support dynamic interserver credentials (rotating credentials without downtime). [#14113](https://github.com/ClickHouse/ClickHouse/pull/14113) ([johnskopis](https://github.com/johnskopis)).
* Added support for the Kafka storage engine with `Arrow` and `ArrowStream` format messages. [#23415](https://github.com/ClickHouse/ClickHouse/pull/23415) ([Chao Ma](https://github.com/godliness)).
* Fixed a missing semicolon in an exception message. The user may find this exception message unpleasant to read. [#23208](https://github.com/ClickHouse/ClickHouse/pull/23208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed missing whitespace in some exception messages about the `LowCardinality` type. [#23207](https://github.com/ClickHouse/ClickHouse/pull/23207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Some values were formatted with center alignment in table cells in `Markdown` format. Not anymore. [#23096](https://github.com/ClickHouse/ClickHouse/pull/23096) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Removed non-essential details from suggestions in clickhouse-client. This closes [#22158](https://github.com/ClickHouse/ClickHouse/issues/22158). [#23040](https://github.com/ClickHouse/ClickHouse/pull/23040) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Corrected the calculation of the `bytes_allocated` field in `system.dictionaries` for `sparse_hashed` dictionaries. [#22867](https://github.com/ClickHouse/ClickHouse/pull/22867) ([Azat Khuzhin](https://github.com/azat)).
* Fixed approximate total rows accounting for reverse reading from MergeTree. [#22726](https://github.com/ClickHouse/ClickHouse/pull/22726) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the case when it was possible to configure a dictionary with a ClickHouse source pointing to itself, which led to an infinite loop. Closes [#14314](https://github.com/ClickHouse/ClickHouse/issues/14314). [#22479](https://github.com/ClickHouse/ClickHouse/pull/22479) ([Maksim Kita](https://github.com/kitaisreal)).
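A short SQL sketch of the `dateDiff` improvement referenced above; the timestamps are arbitrary, and 1927 lies outside the `DateTime` range:

```sql
-- dateDiff now accepts DateTime64 arguments, including values
-- before 1970 that cannot be represented as DateTime.
SELECT dateDiff(
    'year',
    toDateTime64('1927-05-20 00:00:00', 3, 'UTC'),
    toDateTime64('2021-05-20 00:00:00', 3, 'UTC')
) AS years;
```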
#### Bug Fix

* Multiple fixes for hedged requests. Fixed an error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` when the setting `use_hedged_requests` is enabled. Fixes [#23431](https://github.com/ClickHouse/ClickHouse/issues/23431). [#23805](https://github.com/ClickHouse/ClickHouse/pull/23805) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). Fixed a race condition in hedged connections which could lead to a crash. This fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)). Fixed a possible crash when an `unknown packet` was received from a remote query (with `async_socket_for_remote` enabled). Fixes [#21167](https://github.com/ClickHouse/ClickHouse/issues/21167). [#23309](https://github.com/ClickHouse/ClickHouse/pull/23309) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the behavior when disabling the `input_format_with_names_use_header` setting discards all the input with the CSVWithNames format. This fixes [#22406](https://github.com/ClickHouse/ClickHouse/issues/22406). [#23202](https://github.com/ClickHouse/ClickHouse/pull/23202) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed a remote JDBC bridge connection timeout issue. Closes [#9609](https://github.com/ClickHouse/ClickHouse/issues/9609). [#23771](https://github.com/ClickHouse/ClickHouse/pull/23771) ([Maksim Kita](https://github.com/kitaisreal), [alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the logic of the initial load of `complex_key_hashed` dictionaries if `update_field` is specified. Closes [#23800](https://github.com/ClickHouse/ClickHouse/issues/23800). [#23824](https://github.com/ClickHouse/ClickHouse/pull/23824) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed a crash when `PREWHERE` and a row policy filter are both in effect with an empty result. [#23763](https://github.com/ClickHouse/ClickHouse/pull/23763) ([Amos Bird](https://github.com/amosbird)).
* Avoid a possible "Cannot schedule a task" error (in case some exception had occurred) on INSERT into Distributed tables. [#23744](https://github.com/ClickHouse/ClickHouse/pull/23744) ([Azat Khuzhin](https://github.com/azat)).
* Added an exception for the case when both samples passed to the aggregate function `mannWhitneyUTest` contain completely the same values. This fixes [#23646](https://github.com/ClickHouse/ClickHouse/issues/23646). [#23654](https://github.com/ClickHouse/ClickHouse/pull/23654) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed a server fault when inserting data through HTTP caused an exception. This fixes [#23512](https://github.com/ClickHouse/ClickHouse/issues/23512). [#23643](https://github.com/ClickHouse/ClickHouse/pull/23643) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed misinterpretation of some `LIKE` expressions with escape sequences. [#23610](https://github.com/ClickHouse/ClickHouse/pull/23610) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed hanging of the restart/stop command. Closes [#20214](https://github.com/ClickHouse/ClickHouse/issues/20214). [#23552](https://github.com/ClickHouse/ClickHouse/pull/23552) ([filimonov](https://github.com/filimonov)).
* Fixed the `COLUMNS` matcher in case of multiple JOINs in a SELECT query. Closes [#22736](https://github.com/ClickHouse/ClickHouse/issues/22736). [#23501](https://github.com/ClickHouse/ClickHouse/pull/23501) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed a crash when modifying a column's default value while the column itself is used as a `ReplacingMergeTree` parameter. [#23483](https://github.com/ClickHouse/ClickHouse/pull/23483) ([hexiaoting](https://github.com/hexiaoting)).
* Fixed corner cases in vertical merges with `ReplacingMergeTree`. In rare cases they could lead to failed merges with exceptions like `Incomplete granules are not allowed while blocks are granules size`. [#23459](https://github.com/ClickHouse/ClickHouse/pull/23459) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a bug that did not allow casting from an empty array literal to an array with dimensions greater than 1, e.g. `CAST([] AS Array(Array(String)))` (see the SQL sketch after this list). Closes [#14476](https://github.com/ClickHouse/ClickHouse/issues/14476). [#23456](https://github.com/ClickHouse/ClickHouse/pull/23456) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed a bug when the `deltaSum` aggregate function produced an incorrect result after resetting the counter. [#23437](https://github.com/ClickHouse/ClickHouse/pull/23437) ([Russ Frank](https://github.com/rf)).
* Fixed a `Cannot unlink file` error on unsuccessful creation of a ReplicatedMergeTree table with a multidisk configuration. This closes [#21755](https://github.com/ClickHouse/ClickHouse/issues/21755). [#23433](https://github.com/ClickHouse/ClickHouse/pull/23433) ([tavplubix](https://github.com/tavplubix)).
* Fixed incompatible constant expression generation during partition pruning based on virtual columns. This fixes https://github.com/ClickHouse/ClickHouse/pull/21401#discussion_r611888913. [#23366](https://github.com/ClickHouse/ClickHouse/pull/23366) ([Amos Bird](https://github.com/amosbird)).
* Fixed a crash when the setting `join_algorithm` is set to 'auto' and a JOIN is performed with a Dictionary. Closes [#23002](https://github.com/ClickHouse/ClickHouse/issues/23002). [#23312](https://github.com/ClickHouse/ClickHouse/pull/23312) ([Vladimir](https://github.com/vdimir)).
* Don't relax NOT conditions during partition pruning. This fixes [#23305](https://github.com/ClickHouse/ClickHouse/issues/23305) and [#21539](https://github.com/ClickHouse/ClickHouse/issues/21539). [#23310](https://github.com/ClickHouse/ClickHouse/pull/23310) ([Amos Bird](https://github.com/amosbird)).
* Fixed a very rare race condition in background cleanup of old blocks. It might cause a block not to be deduplicated if it's too close to the end of the deduplication window. [#23301](https://github.com/ClickHouse/ClickHouse/pull/23301) ([tavplubix](https://github.com/tavplubix)).
* Fixed a very rare (distributed) race condition between creation and removal of ReplicatedMergeTree tables. It might cause exceptions like `node doesn't exist` on an attempt to create a replicated table. Fixes [#21419](https://github.com/ClickHouse/ClickHouse/issues/21419). [#23294](https://github.com/ClickHouse/ClickHouse/pull/23294) ([tavplubix](https://github.com/tavplubix)).
* Fixed creation of a simple-key dictionary from DDL if the primary key is not the first attribute. Fixes [#23236](https://github.com/ClickHouse/ClickHouse/issues/23236). [#23262](https://github.com/ClickHouse/ClickHouse/pull/23262) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed reading from ODBC when there are many long column names in a table. Closes [#8853](https://github.com/ClickHouse/ClickHouse/issues/8853). [#23215](https://github.com/ClickHouse/ClickHouse/pull/23215) ([Kseniia Sumarokova](https://github.com/kssenii)).
* MaterializeMySQL (experimental feature): fixed a `Not found column` error when selecting from `MaterializeMySQL` with a condition on a key column. Fixes [#22432](https://github.com/ClickHouse/ClickHouse/issues/22432). [#23200](https://github.com/ClickHouse/ClickHouse/pull/23200) ([tavplubix](https://github.com/tavplubix)).
* Corrected alias handling if a subquery was optimized to a constant. Fixes [#22924](https://github.com/ClickHouse/ClickHouse/issues/22924). Fixes [#10401](https://github.com/ClickHouse/ClickHouse/issues/10401). [#23191](https://github.com/ClickHouse/ClickHouse/pull/23191) ([Maksim Kita](https://github.com/kitaisreal)).
* The server might fail to start if the `data_type_default_nullable` setting is enabled in the default profile; it's fixed. Fixes [#22573](https://github.com/ClickHouse/ClickHouse/issues/22573). [#23185](https://github.com/ClickHouse/ClickHouse/pull/23185) ([tavplubix](https://github.com/tavplubix)).
* Fixed a crash on shutdown which happened because of wrong accounting of current connections. [#23154](https://github.com/ClickHouse/ClickHouse/pull/23154) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a `Table .inner_id... doesn't exist` error when selecting from a Materialized View after detaching it from an Atomic database and attaching it back. [#23047](https://github.com/ClickHouse/ClickHouse/pull/23047) ([tavplubix](https://github.com/tavplubix)).
* Fixed the error `Cannot find column in ActionsDAG result` which may happen if a subquery uses `untuple`. Fixes [#22290](https://github.com/ClickHouse/ClickHouse/issues/22290). [#22991](https://github.com/ClickHouse/ClickHouse/pull/22991) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed usage of constant columns of type `Map` with nullable values. [#22939](https://github.com/ClickHouse/ClickHouse/pull/22939) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `formatDateTime()` on `DateTime64` and the `%C` format specifier; fixed `toDateTime64()` for large values and non-zero scale. [#22937](https://github.com/ClickHouse/ClickHouse/pull/22937) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed a crash when using `mannWhitneyUTest` and `rankCorr` with window functions. This fixes [#22728](https://github.com/ClickHouse/ClickHouse/issues/22728). [#22876](https://github.com/ClickHouse/ClickHouse/pull/22876) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* LIVE VIEW (experimental feature): fixed possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in `TemporaryLiveViewCleaner`, [see](https://gist.github.com/vzakaznikov/0c03195960fc86b56bfe2bc73a90019e). [#22858](https://github.com/ClickHouse/ClickHouse/pull/22858) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed pushdown of `HAVING` in the case when the filter column is used in aggregation. [#22763](https://github.com/ClickHouse/ClickHouse/pull/22763) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed possible hangs in ZooKeeper requests in case of an OOM exception. Fixes [#22438](https://github.com/ClickHouse/ClickHouse/issues/22438). [#22684](https://github.com/ClickHouse/ClickHouse/pull/22684) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed waiting for mutations on several replicas for ReplicatedMergeTree table engines. Previously, a mutation/ALTER query might finish before the mutation was actually executed on other replicas. [#22669](https://github.com/ClickHouse/ClickHouse/pull/22669) ([alesapin](https://github.com/alesapin)).
* Fixed an exception for the Log engine with nested types without columns in the SELECT clause. [#22654](https://github.com/ClickHouse/ClickHouse/pull/22654) ([Azat Khuzhin](https://github.com/azat)).
* Fixed unlimited wait for auxiliary AWS requests. [#22594](https://github.com/ClickHouse/ClickHouse/pull/22594) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fixed a crash when a client closes the connection very early [#22579](https://github.com/ClickHouse/ClickHouse/issues/22579). [#22591](https://github.com/ClickHouse/ClickHouse/pull/22591) ([nvartolomei](https://github.com/nvartolomei)).
* `Map` data type (experimental feature): fixed incorrect formatting of the function `map` in distributed queries. [#22588](https://github.com/ClickHouse/ClickHouse/pull/22588) ([foolchi](https://github.com/foolchi)).
* Fixed deserialization of an empty string without a newline at the end of TSV format. This closes [#20244](https://github.com/ClickHouse/ClickHouse/issues/20244). Possible workaround without a version update: set `input_format_null_as_default` to zero. It was zero in old versions. [#22527](https://github.com/ClickHouse/ClickHouse/pull/22527) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed a wrong cast of a column of `LowCardinality` type in the Merge Join algorithm. Closes [#22386](https://github.com/ClickHouse/ClickHouse/issues/22386), closes [#22388](https://github.com/ClickHouse/ClickHouse/issues/22388). [#22510](https://github.com/ClickHouse/ClickHouse/pull/22510) ([Vladimir](https://github.com/vdimir)).
* A buffer overflow (on read) was possible in the `tokenbf_v1` full-text index. The excessive bytes are not used, but the read operation may lead to a crash in rare cases. This closes [#19233](https://github.com/ClickHouse/ClickHouse/issues/19233). [#22421](https://github.com/ClickHouse/ClickHouse/pull/22421) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Do not limit HTTP chunk size. Fixes [#21907](https://github.com/ClickHouse/ClickHouse/issues/21907). [#22322](https://github.com/ClickHouse/ClickHouse/pull/22322) ([Ivan](https://github.com/abyss7)).
* Fixed a bug which led to under-aggregation of data when `optimize_aggregation_in_order` is enabled and the table has many parts. Slightly improved performance of aggregation with `optimize_aggregation_in_order` enabled. [#21889](https://github.com/ClickHouse/ClickHouse/pull/21889) ([Anton Popov](https://github.com/CurtizJ)).
* Check if the table function `view` is used as a column. This complements #20350. [#21465](https://github.com/ClickHouse/ClickHouse/pull/21465) ([Amos Bird](https://github.com/amosbird)).
* Fixed an "unknown column" error for tables with the `Merge` engine in queries with `JOIN` and aggregation. Closes [#18368](https://github.com/ClickHouse/ClickHouse/issues/18368), closes [#22226](https://github.com/ClickHouse/ClickHouse/issues/22226). [#21370](https://github.com/ClickHouse/ClickHouse/pull/21370) ([Vladimir](https://github.com/vdimir)).
* Fixed name clashes in pushdown optimization. It caused incorrect `WHERE` filtration after FULL JOIN. Closes [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
* Fixed a very rare bug when a quorum insert with `quorum_parallel=1` is not really a "quorum" insert because of deduplication. [#18215](https://github.com/ClickHouse/ClickHouse/pull/18215) ([filimonov](https://github.com/filimonov) - reported, [alesapin](https://github.com/alesapin) - fixed).
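A small SQL sketch of the empty-array cast fix referenced above (expected to succeed on versions containing the fix):

```sql
-- Casting an empty array literal to a multidimensional array type,
-- which failed before this fix.
SELECT CAST([] AS Array(Array(String))) AS empty_nested;
```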
#### Build/Testing/Packaging Improvement

* Run stateless tests in parallel in CI. [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
* Simplified Debian packages. This fixes [#21698](https://github.com/ClickHouse/ClickHouse/issues/21698). [#22976](https://github.com/ClickHouse/ClickHouse/pull/22976) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added support for building ClickHouse on Apple M1. [#21639](https://github.com/ClickHouse/ClickHouse/pull/21639) ([changvvb](https://github.com/changvvb)).
* Fixed the ClickHouse Keeper build for macOS. [#22860](https://github.com/ClickHouse/ClickHouse/pull/22860) ([alesapin](https://github.com/alesapin)).
* Fixed some tests on the AArch64 platform. [#22596](https://github.com/ClickHouse/ClickHouse/pull/22596) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added function alignment for possibly better performance. [#21431](https://github.com/ClickHouse/ClickHouse/pull/21431) ([Danila Kutenin](https://github.com/danlark1)).
* Adjusted some tests to output identical results on amd64 and aarch64 (qemu). The result depended on implementation-specific CPU behaviour. [#22590](https://github.com/ClickHouse/ClickHouse/pull/22590) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow query profiling only on x86_64. See [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174#issuecomment-812954965) and [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638#issuecomment-703805337). This closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#22580](https://github.com/ClickHouse/ClickHouse/pull/22580) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow building with unbundled xz (lzma) using the `USE_INTERNAL_XZ_LIBRARY=OFF` CMake option. [#22571](https://github.com/ClickHouse/ClickHouse/pull/22571) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Enable bundled `openldap` on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Disable incompatible libraries (typically platform-specific) on `ppc64le`. [#22475](https://github.com/ClickHouse/ClickHouse/pull/22475) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Added a Jepsen test in CI for ClickHouse Keeper. [#22373](https://github.com/ClickHouse/ClickHouse/pull/22373) ([alesapin](https://github.com/alesapin)).
* Build `jemalloc` with support for [heap profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Heap-Profiling). [#22834](https://github.com/ClickHouse/ClickHouse/pull/22834) ([nvartolomei](https://github.com/nvartolomei)).
* Avoid UB in `*Log` engines for rwlock unlock due to unlock from another thread. [#22583](https://github.com/ClickHouse/ClickHouse/pull/22583) ([Azat Khuzhin](https://github.com/azat)).
* Fixed UB by unlocking the rwlock of the TinyLog from the same thread. [#22560](https://github.com/ClickHouse/ClickHouse/pull/22560) ([Azat Khuzhin](https://github.com/azat)).

## ClickHouse release 21.4

### ClickHouse release 21.4.1 2021-04-12

CMakeLists.txt
@@ -36,7 +36,7 @@ option(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION
if(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION)
    set(RECONFIGURE_MESSAGE_LEVEL FATAL_ERROR)
else()
-   set(RECONFIGURE_MESSAGE_LEVEL STATUS)
+   set(RECONFIGURE_MESSAGE_LEVEL WARNING)
endif()

enable_language(C CXX ASM)
@@ -527,6 +527,7 @@ include (cmake/find/nanodbc.cmake)
include (cmake/find/rocksdb.cmake)
include (cmake/find/libpqxx.cmake)
include (cmake/find/nuraft.cmake)
+include (cmake/find/yaml-cpp.cmake)

if(NOT USE_INTERNAL_PARQUET_LIBRARY)
README.md
@@ -8,7 +8,7 @@ ClickHouse® is an open-source column-oriented database management system that a
* [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster.
* [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information.
* [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
-* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
+* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
* [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
* [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation.
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
@@ -3,5 +3,11 @@ add_library (bridge
)

target_include_directories (daemon PUBLIC ..)
-target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC)
+target_link_libraries (bridge
+    PRIVATE
+        daemon
+        dbms
+        Poco::Data
+        Poco::Data::ODBC
+)
@@ -468,7 +468,7 @@ void BaseDaemon::reloadConfiguration()
     * instead of using files specified in config.xml.
     * (It's convenient to log in console when you start server without any command line parameters.)
     */
-   config_path = config().getString("config-file", "config.xml");
+   config_path = config().getString("config-file", getDefaultConfigFileName());
    DB::ConfigProcessor config_processor(config_path, false, true);
    config_processor.setConfigPath(Poco::Path(config_path).makeParent().toString());
    loaded_config = config_processor.loadConfig(/* allow_zk_includes = */ true);
@@ -516,6 +516,11 @@ std::string BaseDaemon::getDefaultCorePath() const
    return "/opt/cores/";
}

+std::string BaseDaemon::getDefaultConfigFileName() const
+{
+    return "config.xml";
+}
+
void BaseDaemon::closeFDs()
{
#if defined(OS_FREEBSD) || defined(OS_DARWIN)
@@ -149,6 +149,8 @@ protected:

    virtual std::string getDefaultCorePath() const;

+   virtual std::string getDefaultConfigFileName() const;
+
    std::optional<DB::StatusFile> pid_file;

    std::atomic_bool is_cancelled{false};
@@ -78,6 +78,8 @@ PoolWithFailover::PoolWithFailover(
    const RemoteDescription & addresses,
    const std::string & user,
    const std::string & password,
+   unsigned default_connections_,
+   unsigned max_connections_,
    size_t max_tries_)
    : max_tries(max_tries_)
    , shareable(false)
@@ -85,7 +87,13 @@ PoolWithFailover::PoolWithFailover(
    /// Replicas have the same priority, but traversed replicas are moved to the end of the queue.
    for (const auto & [host, port] : addresses)
    {
-       replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port));
+       replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database,
+           host, user, password, port,
+           /* socket_ = */ "",
+           MYSQLXX_DEFAULT_TIMEOUT,
+           MYSQLXX_DEFAULT_RW_TIMEOUT,
+           default_connections_,
+           max_connections_));
    }
}
@@ -115,6 +115,8 @@ namespace mysqlxx
    const RemoteDescription & addresses,
    const std::string & user,
    const std::string & password,
+   unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
+   unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
    size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);

    PoolWithFailover(const PoolWithFailover & other);
@@ -1,9 +1,9 @@
# This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54451)
+SET(VERSION_REVISION 54452)
SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 6)
+SET(VERSION_MINOR 7)
SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96)
+SET(VERSION_GITHASH 976ccc2e908ac3bc28f763bfea8134ea0a121b40)
-SET(VERSION_DESCRIBE v21.6.1.1-prestable)
+SET(VERSION_DESCRIBE v21.7.1.1-prestable)
-SET(VERSION_STRING 21.6.1.1)
+SET(VERSION_STRING 21.7.1.1)
# end of autochange
cmake/find/yaml-cpp.cmake (new file): 9 lines
@@ -0,0 +1,9 @@
option(USE_YAML_CPP "Enable yaml-cpp" ${ENABLE_LIBRARIES})

if (NOT USE_YAML_CPP)
    return()
endif()

if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp")
    message (ERROR "submodule contrib/yaml-cpp is missing. to fix try run: \n git submodule update --init --recursive")
endif()
contrib/CMakeLists.txt (vendored): 4 lines changed
@@ -50,6 +50,10 @@ add_subdirectory (replxx-cmake)
add_subdirectory (unixodbc-cmake)
add_subdirectory (nanodbc-cmake)

+if (USE_YAML_CPP)
+    add_subdirectory (yaml-cpp-cmake)
+endif()
+
if (USE_INTERNAL_XZ_LIBRARY)
    add_subdirectory (xz)
endif()
contrib/grpc (vendored submodule): 2 lines changed
@@ -1 +1 @@
-Subproject commit 5b79aae85c515e0df4abfb7b1e07975fdc7cecc1
+Subproject commit 60c986e15cae70aade721d26badabab1f822fdd6
contrib/re2 (vendored submodule): 2 lines changed
@@ -1 +1 @@
-Subproject commit 7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0
+Subproject commit 13ebb377c6ad763ca61d12dd6f88b1126bd0b911
@@ -1,7 +1,7 @@
file (READ ${SOURCE_FILENAME} CONTENT)
string (REGEX REPLACE "using re2::RE2;" "" CONTENT "${CONTENT}")
string (REGEX REPLACE "using re2::LazyRE2;" "" CONTENT "${CONTENT}")
-string (REGEX REPLACE "namespace re2" "namespace re2_st" CONTENT "${CONTENT}")
+string (REGEX REPLACE "namespace re2 {" "namespace re2_st {" CONTENT "${CONTENT}")
string (REGEX REPLACE "re2::" "re2_st::" CONTENT "${CONTENT}")
string (REGEX REPLACE "\"re2/" "\"re2_st/" CONTENT "${CONTENT}")
string (REGEX REPLACE "(.\\*?_H)" "\\1_ST" CONTENT "${CONTENT}")
contrib/yaml-cpp (new vendored submodule): 1 line
@@ -0,0 +1 @@
+Subproject commit 0c86adac6d117ee2b4afcedb8ade19036ca0327d
contrib/yaml-cpp-cmake/CMakeLists.txt (new file): 39 lines
@@ -0,0 +1,39 @@
set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp)

set (SRCS
    ${LIBRARY_DIR}/src/binary.cpp
    ${LIBRARY_DIR}/src/emitterutils.cpp
    ${LIBRARY_DIR}/src/null.cpp
    ${LIBRARY_DIR}/src/scantoken.cpp
    ${LIBRARY_DIR}/src/convert.cpp
    ${LIBRARY_DIR}/src/exceptions.cpp
    ${LIBRARY_DIR}/src/ostream_wrapper.cpp
    ${LIBRARY_DIR}/src/simplekey.cpp
    ${LIBRARY_DIR}/src/depthguard.cpp
    ${LIBRARY_DIR}/src/exp.cpp
    ${LIBRARY_DIR}/src/parse.cpp
    ${LIBRARY_DIR}/src/singledocparser.cpp
    ${LIBRARY_DIR}/src/directives.cpp
    ${LIBRARY_DIR}/src/memory.cpp
    ${LIBRARY_DIR}/src/parser.cpp
    ${LIBRARY_DIR}/src/stream.cpp
    ${LIBRARY_DIR}/src/emit.cpp
    ${LIBRARY_DIR}/src/nodebuilder.cpp
    ${LIBRARY_DIR}/src/regex_yaml.cpp
    ${LIBRARY_DIR}/src/tag.cpp
    ${LIBRARY_DIR}/src/emitfromevents.cpp
    ${LIBRARY_DIR}/src/node.cpp
    ${LIBRARY_DIR}/src/scanner.cpp
    ${LIBRARY_DIR}/src/emitter.cpp
    ${LIBRARY_DIR}/src/node_data.cpp
    ${LIBRARY_DIR}/src/scanscalar.cpp
    ${LIBRARY_DIR}/src/emitterstate.cpp
    ${LIBRARY_DIR}/src/nodeevents.cpp
    ${LIBRARY_DIR}/src/scantag.cpp
)

add_library (yaml-cpp ${SRCS})

target_include_directories(yaml-cpp PRIVATE ${LIBRARY_DIR}/include/yaml-cpp)
target_include_directories(yaml-cpp SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include)
debian/changelog (vendored): 4 lines changed
@@ -1,5 +1,5 @@
-clickhouse (21.6.1.1) unstable; urgency=low
+clickhouse (21.7.1.1) unstable; urgency=low

  * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Tue, 20 Apr 2021 01:48:16 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 20 May 2021 22:23:29 +0300
@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.6.1.*
+ARG version=21.7.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \
@@ -1,7 +1,7 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.6.1.*
+ARG version=21.7.1.*
ARG gosu_ver=1.10

# set non-empty deb_location_url url to create a docker image
@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.6.1.*
+ARG version=21.7.1.*

RUN apt-get update && \
    apt-get install -y apt-transport-https dirmngr && \
@@ -73,7 +73,7 @@ function start_server
    --path "$FASTTEST_DATA"
    --user_files_path "$FASTTEST_DATA/user_files"
    --top_level_domains_path "$FASTTEST_DATA/top_level_domains"
-   --keeper_server.log_storage_path "$FASTTEST_DATA/coordination"
+   --keeper_server.storage_path "$FASTTEST_DATA/coordination"
)
clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" &
server_pid=$!
@@ -376,35 +376,14 @@ function run_tests
        # Depends on LLVM JIT
        01852_jit_if
        01865_jit_comparison_constant_result
+       01871_merge_tree_compile_expressions
    )

-   (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
-
-   # substr is to remove semicolon after test name
-   readarray -t FAILED_TESTS < <(awk '/\[ FAIL|TIMEOUT|ERROR \]/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt")
-
-   # We will rerun sequentially any tests that have failed during parallel run.
-   # They might have failed because there was some interference from other tests
-   # running concurrently. If they fail even in seqential mode, we will report them.
-   # FIXME All tests that require exclusive access to the server must be
-   # explicitly marked as `sequential`, and `clickhouse-test` must detect them and
-   # run them in a separate group after all other tests. This is faster and also
-   # explicit instead of guessing.
-   if [[ -n "${FAILED_TESTS[*]}" ]]
-   then
-       stop_server ||:
-
-       # Clean the data so that there is no interference from the previous test run.
-       rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files,coordination} ||:
-
-       start_server
-
-       echo "Going to run again: ${FAILED_TESTS[*]}"
-
-       clickhouse-test --hung-check --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt"
-   else
-       echo "No failed tests"
-   fi
+   time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
+       --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" \
+       -- "$FASTTEST_FOCUS" 2>&1 \
+       | ts '%Y-%m-%d %H:%M:%S' \
+       | tee "$FASTTEST_OUTPUT/test_log.txt"
}

case "$stage" in
New file (92 lines): a Docker Compose definition with three clickhouse-keeper services (zoo1, zoo2, zoo3)
@@ -0,0 +1,92 @@
version: '2.3'
services:
  zoo1:
    image: ${image:-yandex/clickhouse-integration-test}
    restart: always
    user: ${user:-}
    volumes:
      - type: bind
        source: ${keeper_binary:-}
        target: /usr/bin/clickhouse
      - type: bind
        source: ${keeper_config_dir1:-}
        target: /etc/clickhouse-keeper
      - type: bind
        source: ${keeper_logs_dir1:-}
        target: /var/log/clickhouse-keeper
      - type: ${keeper_fs:-tmpfs}
        source: ${keeper_db_dir1:-}
        target: /var/lib/clickhouse-keeper
    entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
    cap_add:
      - SYS_PTRACE
      - NET_ADMIN
      - IPC_LOCK
      - SYS_NICE
    security_opt:
      - label:disable
    dns_opt:
      - attempts:2
      - timeout:1
      - inet6
      - rotate
  zoo2:
    image: ${image:-yandex/clickhouse-integration-test}
    restart: always
    user: ${user:-}
    volumes:
      - type: bind
        source: ${keeper_binary:-}
        target: /usr/bin/clickhouse
      - type: bind
        source: ${keeper_config_dir2:-}
        target: /etc/clickhouse-keeper
      - type: bind
        source: ${keeper_logs_dir2:-}
        target: /var/log/clickhouse-keeper
      - type: ${keeper_fs:-tmpfs}
        source: ${keeper_db_dir2:-}
        target: /var/lib/clickhouse-keeper
    entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
    cap_add:
      - SYS_PTRACE
      - NET_ADMIN
      - IPC_LOCK
      - SYS_NICE
    security_opt:
      - label:disable
    dns_opt:
      - attempts:2
      - timeout:1
      - inet6
      - rotate
  zoo3:
    image: ${image:-yandex/clickhouse-integration-test}
    restart: always
    user: ${user:-}
    volumes:
      - type: bind
        source: ${keeper_binary:-}
        target: /usr/bin/clickhouse
      - type: bind
        source: ${keeper_config_dir3:-}
        target: /etc/clickhouse-keeper
      - type: bind
        source: ${keeper_logs_dir3:-}
        target: /var/log/clickhouse-keeper
      - type: ${keeper_fs:-tmpfs}
        source: ${keeper_db_dir3:-}
        target: /var/lib/clickhouse-keeper
    entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
    cap_add:
      - SYS_PTRACE
      - NET_ADMIN
      - IPC_LOCK
      - SYS_NICE
    security_opt:
      - label:disable
    dns_opt:
      - attempts:2
      - timeout:1
      - inet6
      - rotate
@@ -15,7 +15,12 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
-) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
+) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause'])
+SETTINGS
+    [connection_pool_size=16, ]
+    [connection_max_tries=3, ]
+    [connection_auto_close=true ]
+;
```

See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
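To make the new `SETTINGS` clause above concrete, a hypothetical example follows; the table, host and credentials are made up and are not part of the documentation diff:

```sql
CREATE TABLE mysql_orders
(
    id UInt64,
    amount Float64
)
ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'reader', 'secret')
SETTINGS
    connection_pool_size = 8,
    connection_max_tries = 5,
    connection_auto_close = true;
```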
@@ -17,6 +17,7 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
<yandex>
    <!- ... -->
    <ldap_servers>
+       <!- Typical LDAP server. -->
        <my_ldap_server>
            <host>localhost</host>
            <port>636</port>
@@ -31,6 +32,18 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
            <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
            <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
        </my_ldap_server>
+
+       <!- Typical Active Directory with configured user DN detection for further role mapping. -->
+       <my_ad_server>
+           <host>localhost</host>
+           <port>389</port>
+           <bind_dn>EXAMPLE\{user_name}</bind_dn>
+           <user_dn_detection>
+               <base_dn>CN=Users,DC=example,DC=com</base_dn>
+               <search_filter>(&(objectClass=user)(sAMAccountName={user_name}))</search_filter>
+           </user_dn_detection>
+           <enable_tls>no</enable_tls>
+       </my_ad_server>
    </ldap_servers>
</yandex>
```
@ -43,6 +56,15 @@ Note, that you can define multiple LDAP servers inside the `ldap_servers` sectio
|
|||||||
- `port` — LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise.
|
- `port` — LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise.
|
||||||
- `bind_dn` — Template used to construct the DN to bind to.
|
- `bind_dn` — Template used to construct the DN to bind to.
|
||||||
- The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
|
- The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
|
||||||
|
- `user_dn_detection` - Section with LDAP search parameters for detecting the actual user DN of the bound user.
|
||||||
|
- This is mainly used in search filters for further role mapping when the server is Active Directory. The resulting user DN will be used when replacing `{user_dn}` substrings wherever they are allowed. By default, user DN is set equal to bind DN, but once search is performed, it will be updated with to the actual detected user DN value.
|
||||||
|
- `base_dn` - Template used to construct the base DN for the LDAP search.
|
||||||
|
- The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during the LDAP search.
|
||||||
|
- `scope` - Scope of the LDAP search.
|
||||||
|
- Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
|
||||||
|
- `search_filter` - Template used to construct the search filter for the LDAP search.
|
||||||
|
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, and base DN during the LDAP search.
|
||||||
|
- Note, that the special characters must be escaped properly in XML.
|
||||||
- `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server.
|
- `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server.
|
||||||
- Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
|
- Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
|
||||||
- `enable_tls` — A flag to trigger the use of the secure connection to the LDAP server.
|
- `enable_tls` — A flag to trigger the use of the secure connection to the LDAP server.
|
||||||
@ -107,7 +129,7 @@ Goes into `config.xml`.
|
|||||||
<yandex>
|
<yandex>
|
||||||
<!- ... -->
|
<!- ... -->
|
||||||
<user_directories>
|
<user_directories>
|
||||||
<!- ... -->
|
<!- Typical LDAP server. -->
|
||||||
<ldap>
|
<ldap>
|
||||||
<server>my_ldap_server</server>
|
<server>my_ldap_server</server>
|
||||||
<roles>
|
<roles>
|
||||||
@ -122,6 +144,18 @@ Goes into `config.xml`.
|
|||||||
<prefix>clickhouse_</prefix>
|
<prefix>clickhouse_</prefix>
|
||||||
</role_mapping>
|
</role_mapping>
|
||||||
</ldap>
|
</ldap>
|
||||||
|
|
||||||
|
<!- Typical Active Directory with role mapping that relies on the detected user DN. -->
|
||||||
|
<ldap>
|
||||||
|
<server>my_ad_server</server>
|
||||||
|
<role_mapping>
|
||||||
|
<base_dn>CN=Users,DC=example,DC=com</base_dn>
|
||||||
|
<attribute>CN</attribute>
|
||||||
|
<scope>subtree</scope>
|
||||||
|
<search_filter>(&(objectClass=group)(member={user_dn}))</search_filter>
|
||||||
|
<prefix>clickhouse_</prefix>
|
||||||
|
</role_mapping>
|
||||||
|
</ldap>
|
||||||
</user_directories>
|
</user_directories>
|
||||||
</yandex>
|
</yandex>
|
||||||
```
|
```
|
||||||
@ -137,13 +171,13 @@ Note that `my_ldap_server` referred in the `ldap` section inside the `user_direc
|
|||||||
- When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
|
- When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
|
||||||
- There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
|
- There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
|
||||||
- `base_dn` — Template used to construct the base DN for the LDAP search.
|
- `base_dn` — Template used to construct the base DN for the LDAP search.
|
||||||
- The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during each LDAP search.
|
- The resulting DN will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{user_dn}` substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
|
||||||
- `scope` — Scope of the LDAP search.
|
- `scope` — Scope of the LDAP search.
|
||||||
- Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
|
- Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
|
||||||
- `search_filter` — Template used to construct the search filter for the LDAP search.
|
- `search_filter` — Template used to construct the search filter for the LDAP search.
|
||||||
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}` and `{base_dn}` substrings of the template with the actual user name, bind DN and base DN during each LDAP search.
|
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, `{user_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, user DN, and base DN during each LDAP search.
|
||||||
- Note, that the special characters must be escaped properly in XML.
|
- Note, that the special characters must be escaped properly in XML.
|
||||||
- `attribute` — Attribute name whose values will be returned by the LDAP search.
|
- `attribute` — Attribute name whose values will be returned by the LDAP search. `cn`, by default.
|
||||||
- `prefix` — Prefix, that will be expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default.
|
- `prefix` — Prefix, that will be expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default.
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/) <!--hide-->
|
||||||
|
@ -1520,8 +1520,8 @@ Do not merge aggregation states from different servers for distributed query processing.
Possible values:

- 0 — Disabled (final query processing is done on the initiator node).
- 1 — Do not merge aggregation states from different servers for distributed query processing (the query is processed completely on the shard, and the initiator only proxies the data); can be used when it is certain that there are different keys on different shards.
- 2 — Same as `1`, but applies `ORDER BY` and `LIMIT` on the initiator (this is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1`); can be used for queries with `ORDER BY` and/or `LIMIT`.

**Example**
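As a hedged sketch of how mode `2` might be applied to a query with `ORDER BY` and `LIMIT`; the distributed table `distributed_hits` and the column `shard_key` are hypothetical:

``` sql
SELECT shard_key, count() AS c
FROM distributed_hits
GROUP BY shard_key
ORDER BY shard_key
LIMIT 10
SETTINGS distributed_group_by_no_merge = 2;
```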
@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)

**Parameters**

- `window` — Length of the sliding window, it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.
- `mode` — It is an optional argument. One or more modes can be set.
    - `'strict'` — If the same condition holds for a sequence of events, such non-unique events are skipped.
    - `'strict_order'` — Don't allow interventions of other events. E.g. in the case of `A->B->D->C`, it stops finding `A->B->C` at the `D` and the max event level is 2.
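For orientation, a self-contained sketch of a funnel query using these parameters; the `trend` table and the event IDs are hypothetical:

``` sql
SELECT level, count() AS c
FROM
(
    SELECT
        user_id,
        windowFunnel(6048000000000000)(timestamp, eventID = 1003, eventID = 1009, eventID = 1007) AS level
    FROM trend
    GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC;
```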
@ -312,7 +312,7 @@ FROM
    GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC;
```

Result:
@ -422,7 +422,7 @@ Type: [UInt8](../../sql-reference/data-types/int-uint.md).
Query:

``` sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');
```

Result:

@ -436,7 +436,7 @@ Result:
Query:

``` sql
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16');
```

Result:
@ -373,7 +373,7 @@ This function accepts a number or date or date with time, and returns a FixedString.

## reinterpretAsUUID {#reinterpretasuuid}

Accepts a 16-byte string and returns a UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes to the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored.

**Syntax**
@ -429,7 +429,24 @@ Result:

## reinterpret(x, T) {#type_conversion_function-reinterpret}

Uses the same source in-memory bytes sequence for the `x` value and reinterprets it to the destination type.

**Syntax**

``` sql
reinterpret(x, type)
```

**Arguments**

- `x` — Any type.
- `type` — Destination type. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Destination type value.

**Examples**

Query:

```sql
@ -448,11 +465,27 @@ Result:

## CAST(x, T) {#type_conversion_function-cast}

Converts input value `x` to the `T` data type. Unlike the `reinterpret` function, type conversion is performed in a natural way.

The syntax `CAST(x AS t)` is also supported.

!!! note "Note"
    If the value `x` does not fit the bounds of type `T`, the function overflows. For example, `CAST(-1, 'UInt8')` returns `255`.

**Syntax**

``` sql
CAST(x, T)
```

**Arguments**

- `x` — Any type.
- `T` — Destination type. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Destination type value.

**Examples**

@ -460,9 +493,9 @@ Query:

```sql
SELECT
    CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint,
    CAST(toInt8(1), 'Float32') AS cast_int_to_float,
    CAST('1', 'UInt32') AS cast_string_to_int;
```

Result:

@ -492,7 +525,7 @@ Result:
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
```

Conversion to FixedString(N) only works for arguments of type [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).

Type conversion to [Nullable](../../sql-reference/data-types/nullable.md) and back is supported.
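A small sketch of both statements above; the literal values are arbitrary and only meant to show the FixedString and Nullable round trips:

``` sql
SELECT
    CAST('foo', 'FixedString(3)') AS fixed_string_value,
    CAST(CAST(42, 'Nullable(UInt16)'), 'UInt16') AS back_from_nullable;
```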
@ -1038,7 +1071,7 @@ Result:

## parseDateTime64BestEffort {#parsedatetime64besteffort}

Same as the [parseDateTimeBestEffort](#parsedatetimebesteffort) function but also parses milliseconds and microseconds and returns the [DateTime](../../sql-reference/functions/type-conversion-functions.md#data_type-datetime) data type.

**Syntax**
@ -1049,9 +1082,13 @@ parseDateTime64BestEffort(time_string [, precision [, time_zone]])

**Parameters**

- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
- `precision` — Required precision. `3` — for milliseconds, `6` — for microseconds. Default — `3`. Optional. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).

**Returned value**

- `time_string` converted to the [DateTime](../../sql-reference/data-types/datetime.md) data type.

**Examples**

Query:

@ -1064,7 +1101,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
FORMAT PrettyCompactMonoBlock;
```

Result:
@ -1131,12 +1168,14 @@ Result:

## toUnixTimestamp64Nano {#tounixtimestamp64nano}

Converts a `DateTime64` to an `Int64` value with fixed sub-second precision. The input value is scaled up or down appropriately depending on its precision.

!!! info "Note"
    The output value is a timestamp in UTC, not in the timezone of `DateTime64`.

**Syntax**

```sql
toUnixTimestamp64Milli(value)
```

@ -1152,7 +1191,7 @@ toUnixTimestamp64Milli(value)

Query:

```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```
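To illustrate the note about UTC, a hedged variation of the same query with an explicit timezone attached to the `DateTime64` value; the returned `Int64` is still a UTC-based Unix timestamp, not a value localized to that timezone:

```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6, 'Asia/Istanbul') AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```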
@ -1298,4 +1337,3 @@ Result:
│ 2,"good" │
└───────────────────────────────────────────┘
```
|
@ -316,7 +316,7 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A
|
|||||||
|
|
||||||
Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH](../../sql-reference/statements/misc.md#detach) queries according to the following hierarchy of privileges:
|
Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH](../../sql-reference/statements/misc.md#detach) queries according to the following hierarchy of privileges:
|
||||||
|
|
||||||
- `DROP`. Level:
|
- `DROP`. Level: `GROUP`
|
||||||
- `DROP DATABASE`. Level: `DATABASE`
|
- `DROP DATABASE`. Level: `DATABASE`
|
||||||
- `DROP TABLE`. Level: `TABLE`
|
- `DROP TABLE`. Level: `TABLE`
|
||||||
- `DROP VIEW`. Level: `VIEW`
|
- `DROP VIEW`. Level: `VIEW`
|
||||||
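As a hedged illustration of granting privileges from this hierarchy; the database `mydb` and the user `alice` are hypothetical:

``` sql
-- Grant the whole DROP group on one database, or only specific levels of it.
GRANT DROP ON mydb.* TO alice;
GRANT DROP TABLE, DROP VIEW ON mydb.* TO alice;
```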
|
@ -183,7 +183,7 @@ CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9
|
|||||||
#### Ограничения {#limitations}
|
#### Ограничения {#limitations}
|
||||||
* hadoop\_security\_kerberos\_ticket\_cache\_path могут быть определены только на глобальном уровне
|
* hadoop\_security\_kerberos\_ticket\_cache\_path могут быть определены только на глобальном уровне
|
||||||
|
|
||||||
## Поддержика Kerberos {#kerberos-support}
|
## Поддержка Kerberos {#kerberos-support}
|
||||||
|
|
||||||
Если hadoop\_security\_authentication параметр имеет значение 'kerberos', ClickHouse аутентифицируется с помощью Kerberos.
|
Если hadoop\_security\_authentication параметр имеет значение 'kerberos', ClickHouse аутентифицируется с помощью Kerberos.
|
||||||
[Расширенные параметры](#clickhouse-extras) и hadoop\_security\_kerberos\_ticket\_cache\_path помогают сделать это.
|
[Расширенные параметры](#clickhouse-extras) и hadoop\_security\_kerberos\_ticket\_cache\_path помогают сделать это.
|
||||||
|
@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)

**Параметры**

- `window` — ширина скользящего окна по времени. Это время между первым и последним условием. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond1 <= timestamp события cond2 <= ... <= timestamp события condN <= timestamp события cond1 + window`.
- `mode` — необязательный параметр. Может быть установлено несколько значений одновременно.
    - `'strict'` — не учитывать подряд идущие повторяющиеся события.
    - `'strict_order'` — запрещает посторонние события в искомой последовательности. Например, при поиске цепочки `A->B->C` в `A->B->D->C` поиск будет остановлен на `D` и функция вернет 2.
@ -311,7 +311,7 @@ FROM
    GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC;
```

## retention {#retention}
@ -5,11 +5,11 @@ toc_title: "Функции для шифрования"

# Функции шифрования {#encryption-functions}

Данные функции реализуют шифрование и расшифровку данных с помощью алгоритма AES (Advanced Encryption Standard).

Длина ключа зависит от режима шифрования. Она может быть длиной в 16, 24 и 32 байта для режимов шифрования `-128-`, `-196-` и `-256-` соответственно.

Длина инициализирующего вектора всегда 16 байт (лишние байты игнорируются).

Обратите внимание, что до версии ClickHouse 21.1 эти функции работали медленно.
@ -397,9 +397,9 @@ SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0

## isIPAddressInRange {#isipaddressinrange}

Проверяет, попадает ли IP адрес в интервал, заданный в нотации [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).

**Синтаксис**

``` sql
isIPAddressInRange(address, prefix)

@ -409,7 +409,7 @@ isIPAddressInRange(address, prefix)
**Аргументы**

- `address` — IPv4 или IPv6 адрес. [String](../../sql-reference/data-types/string.md).
- `prefix` — IPv4 или IPv6 подсеть, заданная в нотации CIDR. [String](../../sql-reference/data-types/string.md).

**Возвращаемое значение**

@ -422,7 +422,7 @@ isIPAddressInRange(address, prefix)
Запрос:

``` sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');
```

Результат:

@ -436,7 +436,7 @@ SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8')
Запрос:

``` sql
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16');
```

Результат:
@ -369,7 +369,7 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut;

## reinterpretAsUUID {#reinterpretasuuid}

Функция принимает строку из 16 байт и интерпретирует ее байты в порядке от старшего к младшему. Если строка имеет недостаточную длину, то функция работает так, как будто строка дополнена необходимым количеством нулевых байтов с конца. Если строка длиннее, чем 16 байтов, то лишние байты с конца игнорируются.

**Синтаксис**
@ -425,9 +425,27 @@ SELECT uuid = uuid2;

## reinterpret(x, T) {#type_conversion_function-reinterpret}

Использует ту же самую исходную последовательность байтов в памяти для значения `x` и интерпретирует ее как конечный тип данных `T`.

**Синтаксис**

``` sql
reinterpret(x, type)
```

**Аргументы**

- `x` — любой тип данных.
- `type` — конечный тип данных. [String](../../sql-reference/data-types/string.md).

**Возвращаемое значение**

- Значение конечного типа данных.

**Примеры**

Запрос:

```sql
SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
    reinterpret(toInt8(1), 'Float32') as int_to_float,
|
|||||||
|
|
||||||
Поддерживается также синтаксис `CAST(x AS t)`.
|
Поддерживается также синтаксис `CAST(x AS t)`.
|
||||||
|
|
||||||
Обратите внимание, что если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255.
|
!!! warning "Предупреждение"
|
||||||
|
Если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255.
|
||||||
|
|
||||||
|
**Синтаксис**
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CAST(x, T)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Аргументы**
|
||||||
|
|
||||||
|
- `x` — любой тип данных.
|
||||||
|
- `T` — конечный тип данных. [String](../../sql-reference/data-types/string.md).
|
||||||
|
|
||||||
|
**Возвращаемое значение**
|
||||||
|
|
||||||
|
- Значение конечного типа данных.
|
||||||
|
|
||||||
**Примеры**
|
**Примеры**
|
||||||
|
|
||||||
@ -456,9 +490,9 @@ SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,

```sql
SELECT
    CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint,
    CAST(toInt8(1), 'Float32') AS cast_int_to_float,
    CAST('1', 'UInt32') AS cast_string_to_int
```

Результат:

@ -488,9 +522,9 @@ SELECT
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
```

Преобразование в FixedString(N) работает только для аргументов типа [String](../../sql-reference/data-types/string.md) или [FixedString](../../sql-reference/data-types/fixedstring.md).

Поддерживается преобразование к типу [Nullable](../../sql-reference/functions/type-conversion-functions.md) и обратно.

**Примеры**
@ -860,7 +894,7 @@ AS parseDateTimeBestEffortUS;
## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero}
## parseDateTime32BestEffortOrZero {#parsedatetime32besteffortorzero}

Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевое значение, если формат даты не может быть обработан.

## parseDateTimeBestEffortUSOrNull {#parsedatetimebesteffortusornull}
@ -1036,19 +1070,23 @@ SELECT parseDateTimeBestEffortUSOrZero('02.2021') AS parseDateTimeBestEffortUSOrZero

## parseDateTime64BestEffort {#parsedatetime64besteffort}

Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но также принимает миллисекунды и микросекунды. Возвращает тип данных [DateTime](../../sql-reference/functions/type-conversion-functions.md#data_type-datetime).

**Синтаксис**

``` sql
parseDateTime64BestEffort(time_string [, precision [, time_zone]])
```

**Аргументы**

- `time_string` — строка, содержащая дату или дату со временем, которые нужно преобразовать. [String](../../sql-reference/data-types/string.md).
- `precision` — требуемая точность: `3` — для миллисекунд, `6` — для микросекунд. По умолчанию — `3`. Необязательный. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). Разбирает значение `time_string` в зависимости от часового пояса. Необязательный. [String](../../sql-reference/data-types/string.md).

**Возвращаемое значение**

- `time_string`, преобразованная в тип данных [DateTime](../../sql-reference/data-types/datetime.md).

**Примеры**
@ -1062,7 +1100,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
FORMAT PrettyCompactMonoBlock;
```

Результат:
@ -1078,12 +1116,11 @@ FORMAT PrettyCompactMonoBlcok

## parseDateTime64BestEffortOrNull {#parsedatetime32besteffortornull}

Работает аналогично функции [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает `NULL`, если формат даты не может быть обработан.

## parseDateTime64BestEffortOrZero {#parsedatetime64besteffortorzero}

Работает аналогично функции [parseDateTime64BestEffort](#parsedatetimebesteffort), но возвращает нулевую дату и время, если формат даты не может быть обработан.

## toLowCardinality {#tolowcardinality}
@ -1130,11 +1167,14 @@ SELECT toLowCardinality('1');
## toUnixTimestamp64Nano {#tounixtimestamp64nano}

Преобразует значение `DateTime64` в значение `Int64` с фиксированной точностью менее одной секунды.
Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности.

!!! info "Примечание"
    Возвращаемое значение — это временная метка в UTC, а не в часовом поясе `DateTime64`.

**Синтаксис**

```sql
toUnixTimestamp64Milli(value)
```

@ -1150,7 +1190,7 @@ toUnixTimestamp64Milli(value)

Запрос:

```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```
@ -1296,4 +1336,3 @@ FROM numbers(3);
│ 2,"good" │
└───────────────────────────────────────────┘
```
|
@ -51,5 +51,5 @@ The easiest way to see the result is to use `--livereload=8888` argument of buil
|
|||||||
|
|
||||||
At the moment there’s no easy way to do just that, but you can consider:
|
At the moment there’s no easy way to do just that, but you can consider:
|
||||||
|
|
||||||
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA).
|
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w).
|
||||||
- Some search engines allow to subscribe on specific website changes via email and you can opt-in for that for https://clickhouse.tech.
|
- Some search engines allow to subscribe on specific website changes via email and you can opt-in for that for https://clickhouse.tech.
|
||||||
|
@ -62,7 +62,6 @@ def build_amp(lang, args, cfg):
|
|||||||
for root, _, filenames in os.walk(site_temp):
|
for root, _, filenames in os.walk(site_temp):
|
||||||
if 'index.html' in filenames:
|
if 'index.html' in filenames:
|
||||||
paths.append(prepare_amp_html(lang, args, root, site_temp, main_site_dir))
|
paths.append(prepare_amp_html(lang, args, root, site_temp, main_site_dir))
|
||||||
test.test_amp(paths, lang)
|
|
||||||
logging.info(f'Finished building AMP version for {lang}')
|
logging.info(f'Finished building AMP version for {lang}')
|
||||||
|
|
||||||
|
|
||||||
|
@ -40,7 +40,7 @@ def build_for_lang(lang, args):

    site_names = {
        'en': 'ClickHouse Blog',
        'ru': 'Блог ClickHouse'
    }

    assert len(site_names) == len(languages)
@ -62,7 +62,7 @@ def build_for_lang(lang, args):
        strict=True,
        theme=theme_cfg,
        nav=blog_nav,
        copyright='©2016–2021 Yandex LLC',
        use_directory_urls=True,
        repo_name='ClickHouse/ClickHouse',
        repo_url='https://github.com/ClickHouse/ClickHouse/',
@ -94,7 +94,7 @@ def build_for_lang(lang, args):
        site_dir=site_dir,
        strict=True,
        theme=theme_cfg,
        copyright='©2016–2021 Yandex LLC',
        use_directory_urls=True,
        repo_name='ClickHouse/ClickHouse',
        repo_url='https://github.com/ClickHouse/ClickHouse/',
@ -31,7 +31,16 @@ def build_nav_entry(root, args):
            result_items.append((prio, title, payload))
        elif filename.endswith('.md'):
            path = os.path.join(root, filename)
-           meta, content = util.read_md_file(path)

            meta = ''
            content = ''

            try:
                meta, content = util.read_md_file(path)
            except:
                print('Error in file: {}'.format(path))
                raise

            path = path.split('/', 2)[-1]
            title = meta.get('toc_title', find_first_header(content))
            if title:
|
@ -3,34 +3,9 @@
|
|||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
import sys
|
import sys
|
||||||
|
|
||||||
import bs4
|
import bs4
|
||||||
|
|
||||||
import logging
|
|
||||||
import os
|
|
||||||
import subprocess
|
import subprocess
|
||||||
|
|
||||||
import bs4
|
|
||||||
|
|
||||||
|
|
||||||
def test_amp(paths, lang):
|
|
||||||
try:
|
|
||||||
# Get latest amp validator version
|
|
||||||
subprocess.check_call('amphtml-validator --help',
|
|
||||||
stdout=subprocess.DEVNULL,
|
|
||||||
stderr=subprocess.DEVNULL,
|
|
||||||
shell=True)
|
|
||||||
except subprocess.CalledProcessError:
|
|
||||||
subprocess.check_call('npm i -g amphtml-validator', stderr=subprocess.DEVNULL, shell=True)
|
|
||||||
|
|
||||||
paths = ' '.join(paths)
|
|
||||||
command = f'amphtml-validator {paths}'
|
|
||||||
try:
|
|
||||||
subprocess.check_output(command, shell=True).decode('utf-8')
|
|
||||||
except subprocess.CalledProcessError:
|
|
||||||
logging.error(f'Invalid AMP for {lang}')
|
|
||||||
raise
|
|
||||||
|
|
||||||
|
|
||||||
def test_template(template_path):
|
def test_template(template_path):
|
||||||
if template_path.endswith('amp.html'):
|
if template_path.endswith('amp.html'):
|
||||||
|
@ -155,10 +155,6 @@ def build_website(args):
        os.path.join(args.src_dir, 'utils', 'list-versions', 'version_date.tsv'),
        os.path.join(args.output_dir, 'data', 'version_date.tsv'))

-    shutil.copy2(
-        os.path.join(args.website_dir, 'js', 'embedd.min.js'),
-        os.path.join(args.output_dir, 'js', 'embedd.min.js'))
-
    for root, _, filenames in os.walk(args.output_dir):
        for filename in filenames:
            if filename == 'main.html':
@ -7,11 +7,11 @@ toc_title: ODBC

# ODBC {#table-engine-odbc}

允许ClickHouse通过[ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity)方式连接到外部数据库.

为了安全地实现ODBC连接,ClickHouse使用了一个独立程序 `clickhouse-odbc-bridge`. 如果ODBC驱动程序是直接从 `clickhouse-server`中加载的,那么驱动问题可能会导致ClickHouse服务崩溃。 当有需要时,ClickHouse会自动启动 `clickhouse-odbc-bridge`。 ODBC桥梁程序与`clickhouse-server`来自相同的安装包.

该引擎支持 [可为空](../../../sql-reference/data-types/nullable.md) 的数据类型。

## 创建表 {#creating-a-table}

@ -25,14 +25,14 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
ENGINE = ODBC(connection_settings, external_database, external_table)
```

详情请见 [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) 查询。

表结构可以与源表结构不同:

- 列名应与源表中的列名相同,但您可以按任何顺序使用其中的一些列。
- 列类型可能与源表中的列类型不同。 ClickHouse尝试将数值[映射](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) 到ClickHouse的数据类型。

**引擎参数**

- `connection_settings` — Name of the section with connection settings in the `odbc.ini` 文件
- `external_database` — Name of a database in an external DBMS.

@ -40,13 +40,13 @@ ENGINE = ODBC(connection_settings, external_database, external_table)

## 用法示例 {#usage-example}

**通过ODBC从本地安装的MySQL中检索数据**

本示例针对Ubuntu Linux18.04和MySQL服务器5.7进行检查。

请确保安装了unixODBC和MySQL连接器。

默认情况下(如果从软件包安装),ClickHouse以用户`clickhouse`的身份启动。 因此,您需要在MySQL服务器中创建和配置此用户。

``` bash
$ sudo mysql

@ -57,7 +57,7 @@ mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION;
```

然后在`/etc/odbc.ini`中配置连接。

``` bash
$ cat /etc/odbc.ini

@ -70,7 +70,7 @@ USERNAME = clickhouse
PASSWORD = clickhouse
```

您可以从安装的unixodbc中使用 `isql` 实用程序来检查连接情况。

``` bash
$ isql -v mysqlconn
@ -7,37 +7,37 @@ toc_title: "\u6570\u636E\u5907\u4EFD"

# 数据备份 {#data-backup}

尽管 [副本](../engines/table-engines/mergetree-family/replication.md) 可以提供针对硬件错误的防护, 但是它不能预防人为操作失误: 数据的意外删除, 错误表的删除或者错误集群上表的删除, 以及导致错误数据处理或者数据损坏的软件bug. 在很多案例中,这类意外可能会影响所有的副本. ClickHouse 有内置的保护措施可以预防一些错误 — 例如, 默认情况下 [不能人工删除使用带有MergeTree引擎且包含超过50Gb数据的表](server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保护措施不能覆盖所有可能情况,并且这些措施可以被绕过。

为了有效地减少可能的人为错误,您应该 **提前** 仔细地准备备份和数据还原的策略.

不同公司有不同的可用资源和业务需求,因此不存在一个通用的解决方案可以应对各种情况下的ClickHouse备份和恢复。 适用于 1GB 数据的方案可能并不适用于几十 PB 数据的情况。 有多种具备各自优缺点的可能方法,将在下面对其进行讨论。最好使用几种方法而不是仅仅使用一种方法来弥补它们的各种缺点。

!!! note "注"
    需要注意的是,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时可能无法正常恢复(或者至少需要的时间比业务能够容忍的时间更长)。 因此,无论您选择哪种备份方法,请确保自动还原过程,并定期在备用ClickHouse群集上演练。

## 将源数据复制到其它地方 {#duplicating-source-data-somewhere-else}

通常摄入到ClickHouse的数据是通过某种持久队列传递的,例如 [Apache Kafka](https://kafka.apache.org). 在这种情况下,可以配置一组额外的订阅服务器,这些订阅服务器将在写入ClickHouse时读取相同的数据流,并将其存储在冷存储中。 大多数公司已经有一些默认推荐的冷存储,可能是对象存储或分布式文件系统,如 [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html).

## 文件系统快照 {#filesystem-snapshots}

某些本地文件系统提供快照功能(例如, [ZFS](https://en.wikipedia.org/wiki/ZFS)),但它们可能不是提供实时查询的最佳选择。 一个可能的解决方案是使用这种文件系统创建额外的副本,并将它们与用于`SELECT` 查询的 [分布式](../engines/table-engines/special/distributed.md) 表分离。 任何修改数据的查询都无法访问此类副本上的快照。 作为回报,这些副本可能具有特殊的硬件配置,每个服务器附加更多的磁盘,这将是经济高效的。

## clickhouse-copier {#clickhouse-copier}

[clickhouse-copier](utilities/clickhouse-copier.md) 是一个多功能工具,最初创建它是为了用于重新切分pb大小的表。 因为它能够在ClickHouse表和集群之间可靠地复制数据,所以它也可用于备份和还原数据。

对于较小的数据量,一个简单的 `INSERT INTO ... SELECT ...` 到远程表也可以工作。

## part操作 {#manipulations-with-parts}

ClickHouse允许使用 `ALTER TABLE ... FREEZE PARTITION ...` 查询以创建表分区的本地副本。 这是利用硬链接(hardlink)到 `/var/lib/clickhouse/shadow/` 文件夹中实现的,所以它通常不会因为旧数据而占用额外的磁盘空间。 创建的文件副本不由ClickHouse服务器处理,所以你可以把它们留在那里:你将有一个简单的备份,不需要任何额外的外部系统,但它仍然容易出现硬件问题。 出于这个原因,最好将它们远程复制到另一个位置,然后删除本地副本。 分布式文件系统和对象存储仍然是一个不错的选择,但是具有足够大容量的正常附加文件服务器也可以工作(在这种情况下,传输将通过网络文件系统或者也许是 [rsync](https://en.wikipedia.org/wiki/Rsync) 来进行).

数据可以使用 `ALTER TABLE ... ATTACH PARTITION ...` 从备份中恢复。

有关与分区操作相关的查询的详细信息,请参阅 [更改文档](../sql-reference/statements/alter.md#alter_manipulations-with-partitions).

第三方工具可用于自动化此方法: [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup).

[原始文章](https://clickhouse.tech/docs/en/operations/backup/) <!--hide-->
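To make the freeze-and-attach flow above concrete, a hedged sketch; the database, table, and partition are hypothetical, and the frozen part files must be copied back into the table's `detached` directory before the `ATTACH`:

``` sql
-- Create a local snapshot of one partition (hardlinks under /var/lib/clickhouse/shadow/).
ALTER TABLE mydb.visits FREEZE PARTITION 202105;

-- After copying the frozen part files into mydb.visits' detached directory, restore them.
ALTER TABLE mydb.visits ATTACH PARTITION 202105;
```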
@ -5,13 +5,13 @@ machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3

# 系统。data_type_families {#system_tables-data_type_families}

包含有关受支持的[数据类型](../../sql-reference/data-types/)的信息.

列字段包括:

- `name` ([String](../../sql-reference/data-types/string.md)) — 数据类型的名称.
- `case_insensitive` ([UInt8](../../sql-reference/data-types/int-uint.md)) — 该属性显示是否可以在查询中以不区分大小写的方式使用数据类型名称。例如 `Date` 和 `date` 都是有效的。
- `alias_to` ([String](../../sql-reference/data-types/string.md)) — 名称为别名的数据类型名称。

**示例**

@ -36,4 +36,4 @@ SELECT * FROM system.data_type_families WHERE alias_to = 'String'

**另请参阅**

- [Syntax](../../sql-reference/syntax.md) — 关于所支持的语法信息.
@ -7,33 +7,33 @@ toc_title: "\u7CFB\u7EDF\u8868"
|
|||||||
|
|
||||||
# 系统表 {#system-tables}
|
# 系统表 {#system-tables}
|
||||||
|
|
||||||
## 导言 {#system-tables-introduction}
|
## 引言 {#system-tables-introduction}
|
||||||
|
|
||||||
系统表提供以下信息:
|
系统表提供的信息如下:
|
||||||
|
|
||||||
- 服务器状态、进程和环境。
|
- 服务器的状态、进程以及环境。
|
||||||
- 服务器的内部进程。
|
- 服务器的内部进程。
|
||||||
|
|
||||||
系统表:
|
系统表:
|
||||||
|
|
||||||
- 坐落于 `system` 数据库。
|
- 存储于 `system` 数据库。
|
||||||
- 仅适用于读取数据。
|
- 仅提供数据读取功能。
|
||||||
- 不能删除或更改,但可以分离。
|
- 不能被删除或更改,但可以对其进行分离(detach)操作。
|
||||||
|
|
||||||
大多数系统表将数据存储在RAM中。 ClickHouse服务器在开始时创建此类系统表。
|
大多数系统表将其数据存储在RAM中。 一个ClickHouse服务在刚启动时便会创建此类系统表。
|
||||||
|
|
||||||
Unlike other system tables, the system log tables [metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log), [query_log](../../operations/system-tables/query_log.md#system_tables-query_log), [query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log), [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log), [part_log](../../operations/system-tables/part_log.md#system.part_log), crash_log and text_log use the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) engine by default and store their data in the filesystem. If a table is removed from the filesystem manually, the ClickHouse server creates it again as an empty table at the time of the next data write. If the schema of a system table changes in a new version, ClickHouse renames the current table and creates a new one.
You can customize the structure of a system log table by creating a configuration file with the same name as the table under `/etc/clickhouse-server/config.d/`, or by setting the corresponding options in `/etc/clickhouse-server/config.xml`. The options that can be customized are:

- `database`: the database the system log table belongs to. This option is deprecated; all system log tables reside in the `system` database.
- `table`: the table that the data will be written to.
- `partition_by`: specifies a [PARTITION BY](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) expression.
- `ttl`: specifies the table's TTL.
- `flush_interval_milliseconds`: the interval at which the table's data is flushed to disk.
- `engine`: a full table engine definition (starting with `ENGINE = `). This option conflicts with `partition_by` and `ttl`; if it is set together with either of them, the server throws an exception at startup and exits.

An example of such a configuration definition:
```
<yandex>
@ -50,20 +50,20 @@ toc_title: "\u7CFB\u7EDF\u8868"
</yandex>
```
By default, table growth is unlimited. To control the size of a table, you can use TTL settings to remove outdated log records. You can also use the partitioning feature of `MergeTree`-engine tables.
## Sources of System Metrics {#system-tables-sources-of-system-metrics}

To collect system metrics, the ClickHouse server uses:

- The `CAP_NET_ADMIN` capability.
- [procfs](https://en.wikipedia.org/wiki/Procfs) (Linux only).

**procfs**

If the ClickHouse server does not have the `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, the ClickHouse server collects the following metrics (a hedged query sketch for inspecting them follows the list):

- `OSCPUVirtualTimeMicroseconds`
- `OSCPUWaitMicroseconds`
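A hedged sketch of how these per-query counters can be inspected through `system.query_log`, assuming query logging is enabled; the Nested `ProfileEvents` column layout used around this release line is assumed (newer releases expose a Map column instead):

``` sql
SELECT
    query,
    ProfileEvents.Values[indexOf(ProfileEvents.Names, 'OSCPUVirtualTimeMicroseconds')] AS os_cpu_virtual_us,
    ProfileEvents.Values[indexOf(ProfileEvents.Names, 'OSCPUWaitMicroseconds')] AS os_cpu_wait_us
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 10;
```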
@ -5,9 +5,9 @@ toc_priority: 61
toc_title: "\u95F4\u9694"
---
# Interval {#data-type-interval}

The family of data types representing time and date intervals. The resulting types of the [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator.

!!! warning "Warning"
    `Interval` data type values cannot be stored in tables.
@ -15,7 +15,7 @@ toc_title: "\u95F4\u9694"
Structure:

- Time interval as an unsigned integer value.
- Type of the interval.

Supported interval types:
@ -28,7 +28,7 @@ toc_title: "\u95F4\u9694"
- `QUARTER`
- `YEAR`

For each interval type, there is a separate data type. For example, the `DAY` interval corresponds to the `IntervalDay` data type:

``` sql
SELECT toTypeName(INTERVAL 4 DAY)
@ -42,7 +42,7 @@ SELECT toTypeName(INTERVAL 4 DAY)
## Usage Remarks {#data-type-interval-usage-remarks}

You can use `Interval`-type values in arithmetic operations with [Date](../../../sql-reference/data-types/date.md) and [DateTime](../../../sql-reference/data-types/datetime.md) values. For example, you can add 4 days to the current time:

``` sql
SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
@ -54,10 +54,10 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
└─────────────────────┴───────────────────────────────┘
```

Intervals of different types cannot be combined. You cannot use intervals like `4 DAY 1 HOUR`. Specify intervals in units that are smaller than or equal to the smallest unit of the interval; for example, the interval `1 day and an hour` can be expressed as `25 HOUR` or `90000 SECOND`.

You cannot perform arithmetic operations with `Interval`-type values, but you can add intervals of different types to values of the `Date` or `DateTime` data types. For example:

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR
```
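Following up on the `25 HOUR` / `90000 SECOND` remark above, a hedged sketch of the single-unit workaround:

``` sql
-- "1 day and an hour" expressed in a single smaller unit.
SELECT
    now() + INTERVAL 25 HOUR AS plus_25_hours,
    now() + INTERVAL 90000 SECOND AS plus_90000_seconds;
```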
@ -81,5 +81,5 @@ Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argu
## See Also {#see-also}

- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator
- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions
@ -238,7 +238,7 @@ SELECT a, b, c FROM (SELECT ...)
When a `SELECT` clause contains `DISTINCT`, `GROUP BY`, `ORDER BY`, or `LIMIT`, note that these are applied only to each individual block of inserted data. For example, if it contains a `GROUP BY`, data is aggregated during insertion, but only within a single block of written data; the data is not aggregated any further. The exception is when you use an engine that performs its own data aggregation, such as `SummingMergeTree`.

Executing `ALTER` on a materialized view is currently not supported, which can be inconvenient. If the materialized view was created with the `TO [db.]name` clause, you can `DETACH` the view, run the `ALTER` on the target table, and then `ATTACH` the previously detached view.
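A hedged sketch of that workaround, assuming a hypothetical materialized view `db.mv` created with `TO db.target` (the names and the added column are placeholders):

``` sql
DETACH TABLE db.mv;
ALTER TABLE db.target ADD COLUMN extra UInt32;
ATTACH TABLE db.mv;
```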
Views look the same as normal tables. For example, they are listed by `SHOW TABLES`.
@ -14,7 +14,7 @@ INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
The part containing `INSERT INTO t VALUES` is handled by the full SQL parser, while the data part `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` is handled by the fast stream parser. You can also enable the full SQL parser for the data part by using the [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) setting. When `input_format_values_interpret_expressions = 1`, ClickHouse first tries to parse the values with the fast stream parser. If it fails, ClickHouse then tries to use the full SQL parser, treating the data as an SQL [expression](#syntax-expressions).

Data can be in any format. When a query is received, the server calculates no more than [max_query_size](../operations/settings/settings.md#settings-max_query_size) bytes of the request in memory (1 MB by default), and the rest is stream-parsed.

This avoids issues with large `INSERT` queries.
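A hedged sketch of the setting in action (the table `t` and the values follow the example above; `toString(now())` is an expression that the fast stream parser cannot handle on its own):

``` sql
SET input_format_values_interpret_expressions = 1;

INSERT INTO t VALUES (1, 'Hello, world'), (2, toString(now()));
```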
@ -47,6 +47,9 @@ option (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE "HTTP-server working like a proxy to Li
|
|||||||
option (ENABLE_CLICKHOUSE_GIT_IMPORT "A tool to analyze Git repositories"
|
option (ENABLE_CLICKHOUSE_GIT_IMPORT "A tool to analyze Git repositories"
|
||||||
${ENABLE_CLICKHOUSE_ALL})
|
${ENABLE_CLICKHOUSE_ALL})
|
||||||
|
|
||||||
|
|
||||||
|
option (ENABLE_CLICKHOUSE_KEEPER "ClickHouse alternative to ZooKeeper" ${ENABLE_CLICKHOUSE_ALL})
|
||||||
|
|
||||||
if (CLICKHOUSE_SPLIT_BINARY)
|
if (CLICKHOUSE_SPLIT_BINARY)
|
||||||
option(ENABLE_CLICKHOUSE_INSTALL "Install ClickHouse without .deb/.rpm/.tgz packages (having the binary only)" OFF)
|
option(ENABLE_CLICKHOUSE_INSTALL "Install ClickHouse without .deb/.rpm/.tgz packages (having the binary only)" OFF)
|
||||||
else ()
|
else ()
|
||||||
@ -134,6 +137,12 @@ else()
|
|||||||
message(STATUS "ClickHouse git-import: OFF")
|
message(STATUS "ClickHouse git-import: OFF")
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
|
if (ENABLE_CLICKHOUSE_KEEPER)
|
||||||
|
message(STATUS "ClickHouse keeper mode: ON")
|
||||||
|
else()
|
||||||
|
message(STATUS "ClickHouse keeper mode: OFF")
|
||||||
|
endif()
|
||||||
|
|
||||||
if(NOT (MAKE_STATIC_LIBRARIES OR SPLIT_SHARED_LIBRARIES))
|
if(NOT (MAKE_STATIC_LIBRARIES OR SPLIT_SHARED_LIBRARIES))
|
||||||
set(CLICKHOUSE_ONE_SHARED ON)
|
set(CLICKHOUSE_ONE_SHARED ON)
|
||||||
endif()
|
endif()
|
||||||
@ -189,6 +198,54 @@ macro(clickhouse_program_add name)
|
|||||||
clickhouse_program_add_executable(${name})
|
clickhouse_program_add_executable(${name})
|
||||||
endmacro()
|
endmacro()
|
||||||
|
|
||||||
|
# Embed default config files as a resource into the binary.
|
||||||
|
# This is needed for two purposes:
|
||||||
|
# 1. Allow to run the binary without download of any other files.
|
||||||
|
# 2. Allow to implement "sudo clickhouse install" tool.
|
||||||
|
#
|
||||||
|
# Arguments: target (server, client, keeper, etc.) and list of files
|
||||||
|
#
|
||||||
|
# Also dependency on TARGET_FILE is required, look at examples in programs/server and programs/keeper
|
||||||
|
macro(clickhouse_embed_binaries)
|
||||||
|
# TODO We actually need this on Mac, FreeBSD.
|
||||||
|
if (OS_LINUX)
|
||||||
|
|
||||||
|
set(arguments_list "${ARGN}")
|
||||||
|
list(GET arguments_list 0 target)
|
||||||
|
|
||||||
|
# for some reason cmake iterates loop including <stop>
|
||||||
|
math(EXPR arguments_count "${ARGC}-1")
|
||||||
|
|
||||||
|
foreach(RESOURCE_POS RANGE 1 "${arguments_count}")
|
||||||
|
list(GET arguments_list "${RESOURCE_POS}" RESOURCE_FILE)
|
||||||
|
set(RESOURCE_OBJ ${RESOURCE_FILE}.o)
|
||||||
|
set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ})
|
||||||
|
|
||||||
|
# https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake
|
||||||
|
# PPC64LE fails to do this with objcopy, use ld or lld instead
|
||||||
|
if (ARCH_PPC64LE)
|
||||||
|
add_custom_command(OUTPUT ${RESOURCE_OBJ}
|
||||||
|
COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE})
|
||||||
|
else()
|
||||||
|
add_custom_command(OUTPUT ${RESOURCE_OBJ}
|
||||||
|
COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}"
|
||||||
|
COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents
|
||||||
|
"${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}")
|
||||||
|
endif()
|
||||||
|
set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true)
|
||||||
|
endforeach()
|
||||||
|
|
||||||
|
add_library(clickhouse_${target}_configs STATIC ${RESOURCE_OBJS})
|
||||||
|
set_target_properties(clickhouse_${target}_configs PROPERTIES LINKER_LANGUAGE C)
|
||||||
|
|
||||||
|
# whole-archive prevents symbols from being discarded for unknown reason
|
||||||
|
# CMake can shuffle each of target_link_libraries arguments with other
|
||||||
|
# libraries in linker command. To avoid this we hardcode whole-archive
|
||||||
|
# library into single string.
|
||||||
|
add_dependencies(clickhouse-${target}-lib clickhouse_${target}_configs)
|
||||||
|
endif ()
|
||||||
|
endmacro()
|
||||||
|
|
||||||
|
|
||||||
add_subdirectory (server)
|
add_subdirectory (server)
|
||||||
add_subdirectory (client)
|
add_subdirectory (client)
|
||||||
@ -202,6 +259,7 @@ add_subdirectory (obfuscator)
|
|||||||
add_subdirectory (install)
|
add_subdirectory (install)
|
||||||
add_subdirectory (git-import)
|
add_subdirectory (git-import)
|
||||||
add_subdirectory (bash-completion)
|
add_subdirectory (bash-completion)
|
||||||
|
add_subdirectory (keeper)
|
||||||
|
|
||||||
if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
|
if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
|
||||||
add_subdirectory (odbc-bridge)
|
add_subdirectory (odbc-bridge)
|
||||||
@ -212,15 +270,15 @@ if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE)
|
|||||||
endif ()
|
endif ()
|
||||||
|
|
||||||
if (CLICKHOUSE_ONE_SHARED)
|
if (CLICKHOUSE_ONE_SHARED)
|
||||||
add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
|
add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES} ${CLICKHOUSE_KEEPER_SOURCES})
|
||||||
target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
|
target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK} ${CLICKHOUSE_KEEPER_LINK})
|
||||||
target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
|
target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} ${CLICKHOUSE_KEEPER_INCLUDE})
|
||||||
set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
|
set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
|
||||||
install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse)
|
install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse)
|
||||||
endif()
|
endif()
|
||||||
|
|
||||||
if (CLICKHOUSE_SPLIT_BINARY)
|
if (CLICKHOUSE_SPLIT_BINARY)
|
||||||
set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier)
|
set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier clickhouse-keeper)
|
||||||
|
|
||||||
if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
|
if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
|
||||||
list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge)
|
list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge)
|
||||||
@ -277,6 +335,9 @@ else ()
|
|||||||
if (ENABLE_CLICKHOUSE_GIT_IMPORT)
|
if (ENABLE_CLICKHOUSE_GIT_IMPORT)
|
||||||
clickhouse_target_link_split_lib(clickhouse git-import)
|
clickhouse_target_link_split_lib(clickhouse git-import)
|
||||||
endif ()
|
endif ()
|
||||||
|
if (ENABLE_CLICKHOUSE_KEEPER)
|
||||||
|
clickhouse_target_link_split_lib(clickhouse keeper)
|
||||||
|
endif()
|
||||||
if (ENABLE_CLICKHOUSE_INSTALL)
|
if (ENABLE_CLICKHOUSE_INSTALL)
|
||||||
clickhouse_target_link_split_lib(clickhouse install)
|
clickhouse_target_link_split_lib(clickhouse install)
|
||||||
endif ()
|
endif ()
|
||||||
@ -332,6 +393,11 @@ else ()
|
|||||||
install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
|
install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
|
||||||
list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import)
|
list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import)
|
||||||
endif ()
|
endif ()
|
||||||
|
if (ENABLE_CLICKHOUSE_KEEPER)
|
||||||
|
add_custom_target (clickhouse-keeper ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-keeper DEPENDS clickhouse)
|
||||||
|
install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-keeper" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
|
||||||
|
list(APPEND CLICKHOUSE_BUNDLE clickhouse-keeper)
|
||||||
|
endif ()
|
||||||
|
|
||||||
install (TARGETS clickhouse RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
|
install (TARGETS clickhouse RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
|
||||||
|
|
||||||
|
@ -1366,6 +1366,27 @@ private:
|
|||||||
{
|
{
|
||||||
const auto * exception = server_exception ? server_exception.get() : client_exception.get();
|
const auto * exception = server_exception ? server_exception.get() : client_exception.get();
|
||||||
fmt::print(stderr, "Error on processing query '{}': {}\n", ast_to_process->formatForErrorMessage(), exception->message());
|
fmt::print(stderr, "Error on processing query '{}': {}\n", ast_to_process->formatForErrorMessage(), exception->message());
|
||||||
|
|
||||||
|
// Try to reconnect after errors, for two reasons:
|
||||||
|
// 1. We might not have realized that the server died, e.g. if
|
||||||
|
// it sent us a <Fatal> trace and closed connection properly.
|
||||||
|
// 2. The connection might have gotten into a wrong state and
|
||||||
|
// the next query will get false positive about
|
||||||
|
// "Unknown packet from server".
|
||||||
|
try
|
||||||
|
{
|
||||||
|
connection->forceConnected(connection_parameters.timeouts);
|
||||||
|
}
|
||||||
|
catch (...)
|
||||||
|
{
|
||||||
|
// Just report it, we'll terminate below.
|
||||||
|
fmt::print(stderr,
|
||||||
|
"Error while reconnecting to the server: Code: {}: {}\n",
|
||||||
|
getCurrentExceptionCode(),
|
||||||
|
getCurrentExceptionMessage(true));
|
||||||
|
|
||||||
|
assert(!connection->isConnected());
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
if (!connection->isConnected())
|
if (!connection->isConnected())
|
||||||
@ -1469,11 +1490,6 @@ private:
|
|||||||
server_exception.reset();
|
server_exception.reset();
|
||||||
client_exception.reset();
|
client_exception.reset();
|
||||||
have_error = false;
|
have_error = false;
|
||||||
|
|
||||||
// We have to reinitialize connection after errors, because it
|
|
||||||
// might have gotten into a wrong state and we'll get false
|
|
||||||
// positives about "Unknown packet from server".
|
|
||||||
connection->forceConnected(connection_parameters.timeouts);
|
|
||||||
}
|
}
|
||||||
else if (ast_to_process->formatForErrorMessage().size() > 500)
|
else if (ast_to_process->formatForErrorMessage().size() > 500)
|
||||||
{
|
{
|
||||||
|
@ -16,3 +16,4 @@
|
|||||||
#cmakedefine01 ENABLE_CLICKHOUSE_INSTALL
|
#cmakedefine01 ENABLE_CLICKHOUSE_INSTALL
|
||||||
#cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE
|
#cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE
|
||||||
#cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE
|
#cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE
|
||||||
|
#cmakedefine01 ENABLE_CLICKHOUSE_KEEPER
|
||||||
|
programs/keeper/CMakeLists.txt (new file, 24 lines)
@ -0,0 +1,24 @@
|
|||||||
|
set(CLICKHOUSE_KEEPER_SOURCES
|
||||||
|
Keeper.cpp
|
||||||
|
)
|
||||||
|
|
||||||
|
if (OS_LINUX)
|
||||||
|
set (LINK_RESOURCE_LIB INTERFACE "-Wl,${WHOLE_ARCHIVE} $<TARGET_FILE:clickhouse_keeper_configs> -Wl,${NO_WHOLE_ARCHIVE}")
|
||||||
|
endif ()
|
||||||
|
|
||||||
|
set (CLICKHOUSE_KEEPER_LINK
|
||||||
|
PRIVATE
|
||||||
|
clickhouse_common_config
|
||||||
|
clickhouse_common_io
|
||||||
|
clickhouse_common_zookeeper
|
||||||
|
daemon
|
||||||
|
dbms
|
||||||
|
|
||||||
|
${LINK_RESOURCE_LIB}
|
||||||
|
)
|
||||||
|
|
||||||
|
clickhouse_program_add(keeper)
|
||||||
|
|
||||||
|
install (FILES keeper_config.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-keeper" COMPONENT clickhouse-keeper)
|
||||||
|
|
||||||
|
clickhouse_embed_binaries(keeper keeper_config.xml keeper_embedded.xml)
|
programs/keeper/Keeper.cpp (new file, 474 lines)
@ -0,0 +1,474 @@
|
|||||||
|
#include "Keeper.h"
|
||||||
|
|
||||||
|
#include <sys/stat.h>
|
||||||
|
#include <pwd.h>
|
||||||
|
#include <Common/ClickHouseRevision.h>
|
||||||
|
#include <Server/ProtocolServerAdapter.h>
|
||||||
|
#include <Common/DNSResolver.h>
|
||||||
|
#include <Interpreters/DNSCacheUpdater.h>
|
||||||
|
#include <Poco/Net/NetException.h>
|
||||||
|
#include <Poco/Net/TCPServerParams.h>
|
||||||
|
#include <Poco/Net/TCPServer.h>
|
||||||
|
#include <common/defines.h>
|
||||||
|
#include <common/logger_useful.h>
|
||||||
|
#include <common/ErrorHandlers.h>
|
||||||
|
#include <ext/scope_guard.h>
|
||||||
|
#include <Poco/Util/HelpFormatter.h>
|
||||||
|
#include <Poco/Version.h>
|
||||||
|
#include <Poco/Environment.h>
|
||||||
|
#include <Common/getMultipleKeysFromConfig.h>
|
||||||
|
#include <filesystem>
|
||||||
|
#include <IO/UseSSL.h>
|
||||||
|
|
||||||
|
#if !defined(ARCADIA_BUILD)
|
||||||
|
# include "config_core.h"
|
||||||
|
# include "Common/config_version.h"
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#if USE_SSL
|
||||||
|
# include <Poco/Net/Context.h>
|
||||||
|
# include <Poco/Net/SecureServerSocket.h>
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#if USE_NURAFT
|
||||||
|
# include <Server/KeeperTCPHandlerFactory.h>
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#if defined(OS_LINUX)
|
||||||
|
# include <unistd.h>
|
||||||
|
# include <sys/syscall.h>
|
||||||
|
#endif
|
||||||
|
|
||||||
|
|
||||||
|
int mainEntryClickHouseKeeper(int argc, char ** argv)
|
||||||
|
{
|
||||||
|
DB::Keeper app;
|
||||||
|
|
||||||
|
try
|
||||||
|
{
|
||||||
|
return app.run(argc, argv);
|
||||||
|
}
|
||||||
|
catch (...)
|
||||||
|
{
|
||||||
|
std::cerr << DB::getCurrentExceptionMessage(true) << "\n";
|
||||||
|
auto code = DB::getCurrentExceptionCode();
|
||||||
|
return code ? code : 1;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
|
||||||
|
namespace ErrorCodes
|
||||||
|
{
|
||||||
|
extern const int NO_ELEMENTS_IN_CONFIG;
|
||||||
|
extern const int SUPPORT_IS_DISABLED;
|
||||||
|
extern const int NETWORK_ERROR;
|
||||||
|
extern const int MISMATCHING_USERS_FOR_PROCESS_AND_DATA;
|
||||||
|
extern const int FAILED_TO_GETPWUID;
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace
|
||||||
|
{
|
||||||
|
|
||||||
|
int waitServersToFinish(std::vector<DB::ProtocolServerAdapter> & servers, size_t seconds_to_wait)
|
||||||
|
{
|
||||||
|
const int sleep_max_ms = 1000 * seconds_to_wait;
|
||||||
|
const int sleep_one_ms = 100;
|
||||||
|
int sleep_current_ms = 0;
|
||||||
|
int current_connections = 0;
|
||||||
|
for (;;)
|
||||||
|
{
|
||||||
|
current_connections = 0;
|
||||||
|
|
||||||
|
for (auto & server : servers)
|
||||||
|
{
|
||||||
|
server.stop();
|
||||||
|
current_connections += server.currentConnections();
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!current_connections)
|
||||||
|
break;
|
||||||
|
|
||||||
|
sleep_current_ms += sleep_one_ms;
|
||||||
|
if (sleep_current_ms < sleep_max_ms)
|
||||||
|
std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms));
|
||||||
|
else
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
return current_connections;
|
||||||
|
}
|
||||||
|
|
||||||
|
Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log)
|
||||||
|
{
|
||||||
|
Poco::Net::SocketAddress socket_address;
|
||||||
|
try
|
||||||
|
{
|
||||||
|
socket_address = Poco::Net::SocketAddress(host, port);
|
||||||
|
}
|
||||||
|
catch (const Poco::Net::DNSException & e)
|
||||||
|
{
|
||||||
|
const auto code = e.code();
|
||||||
|
if (code == EAI_FAMILY
|
||||||
|
#if defined(EAI_ADDRFAMILY)
|
||||||
|
|| code == EAI_ADDRFAMILY
|
||||||
|
#endif
|
||||||
|
)
|
||||||
|
{
|
||||||
|
LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. "
|
||||||
|
"If it is an IPv6 address and your host has disabled IPv6, then consider to "
|
||||||
|
"specify IPv4 address to listen in <listen_host> element of configuration "
|
||||||
|
"file. Example: <listen_host>0.0.0.0</listen_host>",
|
||||||
|
host, e.code(), e.message());
|
||||||
|
}
|
||||||
|
|
||||||
|
throw;
|
||||||
|
}
|
||||||
|
return socket_address;
|
||||||
|
}
|
||||||
|
|
||||||
|
[[noreturn]] void forceShutdown()
|
||||||
|
{
|
||||||
|
#if defined(THREAD_SANITIZER) && defined(OS_LINUX)
|
||||||
|
/// Thread sanitizer tries to do something on exit that we don't need if we want to exit immediately,
|
||||||
|
/// while connection handling threads are still run.
|
||||||
|
(void)syscall(SYS_exit_group, 0);
|
||||||
|
__builtin_unreachable();
|
||||||
|
#else
|
||||||
|
_exit(0);
|
||||||
|
#endif
|
||||||
|
}
|
||||||
|
|
||||||
|
std::string getUserName(uid_t user_id)
|
||||||
|
{
|
||||||
|
/// Try to convert user id into user name.
|
||||||
|
auto buffer_size = sysconf(_SC_GETPW_R_SIZE_MAX);
|
||||||
|
if (buffer_size <= 0)
|
||||||
|
buffer_size = 1024;
|
||||||
|
std::string buffer;
|
||||||
|
buffer.reserve(buffer_size);
|
||||||
|
|
||||||
|
struct passwd passwd_entry;
|
||||||
|
struct passwd * result = nullptr;
|
||||||
|
const auto error = getpwuid_r(user_id, &passwd_entry, buffer.data(), buffer_size, &result);
|
||||||
|
|
||||||
|
if (error)
|
||||||
|
throwFromErrno("Failed to find user name for " + toString(user_id), ErrorCodes::FAILED_TO_GETPWUID, error);
|
||||||
|
else if (result)
|
||||||
|
return result->pw_name;
|
||||||
|
return toString(user_id);
|
||||||
|
}
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
Poco::Net::SocketAddress Keeper::socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure) const
|
||||||
|
{
|
||||||
|
auto address = makeSocketAddress(host, port, &logger());
|
||||||
|
#if !defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION < 0x01090100
|
||||||
|
if (secure)
|
||||||
|
/// Bug in old (<1.9.1) poco, listen() after bind() with reusePort param will fail because have no implementation in SecureServerSocketImpl
|
||||||
|
/// https://github.com/pocoproject/poco/pull/2257
|
||||||
|
socket.bind(address, /* reuseAddress = */ true);
|
||||||
|
else
|
||||||
|
#endif
|
||||||
|
#if POCO_VERSION < 0x01080000
|
||||||
|
socket.bind(address, /* reuseAddress = */ true);
|
||||||
|
#else
|
||||||
|
socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ config().getBool("listen_reuse_port", false));
|
||||||
|
#endif
|
||||||
|
|
||||||
|
socket.listen(/* backlog = */ config().getUInt("listen_backlog", 64));
|
||||||
|
|
||||||
|
return address;
|
||||||
|
}
|
||||||
|
|
||||||
|
void Keeper::createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const
|
||||||
|
{
|
||||||
|
/// For testing purposes, user may omit tcp_port or http_port or https_port in configuration file.
|
||||||
|
if (!config().has(port_name))
|
||||||
|
return;
|
||||||
|
|
||||||
|
auto port = config().getInt(port_name);
|
||||||
|
try
|
||||||
|
{
|
||||||
|
func(port);
|
||||||
|
}
|
||||||
|
catch (const Poco::Exception &)
|
||||||
|
{
|
||||||
|
std::string message = "Listen [" + listen_host + "]:" + std::to_string(port) + " failed: " + getCurrentExceptionMessage(false);
|
||||||
|
|
||||||
|
if (listen_try)
|
||||||
|
{
|
||||||
|
LOG_WARNING(&logger(), "{}. If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to "
|
||||||
|
"specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration "
|
||||||
|
"file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> ."
|
||||||
|
" Example for disabled IPv4: <listen_host>::</listen_host>",
|
||||||
|
message);
|
||||||
|
}
|
||||||
|
else
|
||||||
|
{
|
||||||
|
throw Exception{message, ErrorCodes::NETWORK_ERROR};
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void Keeper::uninitialize()
|
||||||
|
{
|
||||||
|
logger().information("shutting down");
|
||||||
|
BaseDaemon::uninitialize();
|
||||||
|
}
|
||||||
|
|
||||||
|
int Keeper::run()
|
||||||
|
{
|
||||||
|
if (config().hasOption("help"))
|
||||||
|
{
|
||||||
|
Poco::Util::HelpFormatter help_formatter(Keeper::options());
|
||||||
|
auto header_str = fmt::format("{} [OPTION] [-- [ARG]...]\n"
|
||||||
|
"positional arguments can be used to rewrite config.xml properties, for example, --http_port=8010",
|
||||||
|
commandName());
|
||||||
|
help_formatter.setHeader(header_str);
|
||||||
|
help_formatter.format(std::cout);
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
if (config().hasOption("version"))
|
||||||
|
{
|
||||||
|
std::cout << DBMS_NAME << " keeper version " << VERSION_STRING << VERSION_OFFICIAL << "." << std::endl;
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
return Application::run(); // NOLINT
|
||||||
|
}
|
||||||
|
|
||||||
|
void Keeper::initialize(Poco::Util::Application & self)
|
||||||
|
{
|
||||||
|
BaseDaemon::initialize(self);
|
||||||
|
logger().information("starting up");
|
||||||
|
|
||||||
|
LOG_INFO(&logger(), "OS Name = {}, OS Version = {}, OS Architecture = {}",
|
||||||
|
Poco::Environment::osName(),
|
||||||
|
Poco::Environment::osVersion(),
|
||||||
|
Poco::Environment::osArchitecture());
|
||||||
|
}
|
||||||
|
|
||||||
|
std::string Keeper::getDefaultConfigFileName() const
|
||||||
|
{
|
||||||
|
return "keeper_config.xml";
|
||||||
|
}
|
||||||
|
|
||||||
|
void Keeper::defineOptions(Poco::Util::OptionSet & options)
|
||||||
|
{
|
||||||
|
options.addOption(
|
||||||
|
Poco::Util::Option("help", "h", "show help and exit")
|
||||||
|
.required(false)
|
||||||
|
.repeatable(false)
|
||||||
|
.binding("help"));
|
||||||
|
options.addOption(
|
||||||
|
Poco::Util::Option("version", "V", "show version and exit")
|
||||||
|
.required(false)
|
||||||
|
.repeatable(false)
|
||||||
|
.binding("version"));
|
||||||
|
BaseDaemon::defineOptions(options);
|
||||||
|
}
|
||||||
|
|
||||||
|
int Keeper::main(const std::vector<std::string> & /*args*/)
|
||||||
|
{
|
||||||
|
Poco::Logger * log = &logger();
|
||||||
|
|
||||||
|
UseSSL use_ssl;
|
||||||
|
|
||||||
|
MainThreadStatus::getInstance();
|
||||||
|
|
||||||
|
#if !defined(NDEBUG) || !defined(__OPTIMIZE__)
|
||||||
|
LOG_WARNING(log, "Keeper was built in debug mode. It will work slowly.");
|
||||||
|
#endif
|
||||||
|
|
||||||
|
#if defined(SANITIZER)
|
||||||
|
LOG_WARNING(log, "Keeper was built with sanitizer. It will work slowly.");
|
||||||
|
#endif
|
||||||
|
|
||||||
|
auto shared_context = Context::createShared();
|
||||||
|
global_context = Context::createGlobal(shared_context.get());
|
||||||
|
|
||||||
|
global_context->makeGlobalContext();
|
||||||
|
global_context->setApplicationType(Context::ApplicationType::KEEPER);
|
||||||
|
|
||||||
|
if (!config().has("keeper_server"))
|
||||||
|
throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Keeper configuration (<keeper_server> section) not found in config");
|
||||||
|
|
||||||
|
|
||||||
|
std::string path;
|
||||||
|
|
||||||
|
if (config().has("keeper_server.storage_path"))
|
||||||
|
path = config().getString("keeper_server.storage_path");
|
||||||
|
else if (config().has("keeper_server.log_storage_path"))
|
||||||
|
path = config().getString("keeper_server.log_storage_path");
|
||||||
|
else if (config().has("keeper_server.snapshot_storage_path"))
|
||||||
|
path = config().getString("keeper_server.snapshot_storage_path");
|
||||||
|
else
|
||||||
|
path = std::filesystem::path{KEEPER_DEFAULT_PATH};
|
||||||
|
|
||||||
|
|
||||||
|
/// Check that the process user id matches the owner of the data.
|
||||||
|
const auto effective_user_id = geteuid();
|
||||||
|
struct stat statbuf;
|
||||||
|
if (stat(path.c_str(), &statbuf) == 0 && effective_user_id != statbuf.st_uid)
|
||||||
|
{
|
||||||
|
const auto effective_user = getUserName(effective_user_id);
|
||||||
|
const auto data_owner = getUserName(statbuf.st_uid);
|
||||||
|
std::string message = "Effective user of the process (" + effective_user +
|
||||||
|
") does not match the owner of the data (" + data_owner + ").";
|
||||||
|
if (effective_user_id == 0)
|
||||||
|
{
|
||||||
|
message += " Run under 'sudo -u " + data_owner + "'.";
|
||||||
|
throw Exception(message, ErrorCodes::MISMATCHING_USERS_FOR_PROCESS_AND_DATA);
|
||||||
|
}
|
||||||
|
else
|
||||||
|
{
|
||||||
|
LOG_WARNING(log, message);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const Settings & settings = global_context->getSettingsRef();
|
||||||
|
|
||||||
|
GlobalThreadPool::initialize(config().getUInt("max_thread_pool_size", 100));
|
||||||
|
|
||||||
|
static ServerErrorHandler error_handler;
|
||||||
|
Poco::ErrorHandler::set(&error_handler);
|
||||||
|
|
||||||
|
/// Initialize DateLUT early, to not interfere with running time of first query.
|
||||||
|
LOG_DEBUG(log, "Initializing DateLUT.");
|
||||||
|
DateLUT::instance();
|
||||||
|
LOG_TRACE(log, "Initialized DateLUT with time zone '{}'.", DateLUT::instance().getTimeZone());
|
||||||
|
|
||||||
|
/// Don't want to use DNS cache
|
||||||
|
DNSResolver::instance().setDisableCacheFlag();
|
||||||
|
|
||||||
|
Poco::ThreadPool server_pool(3, config().getUInt("max_connections", 1024));
|
||||||
|
|
||||||
|
std::vector<std::string> listen_hosts = DB::getMultipleValuesFromConfig(config(), "", "listen_host");
|
||||||
|
|
||||||
|
bool listen_try = config().getBool("listen_try", false);
|
||||||
|
if (listen_hosts.empty())
|
||||||
|
{
|
||||||
|
listen_hosts.emplace_back("::1");
|
||||||
|
listen_hosts.emplace_back("127.0.0.1");
|
||||||
|
listen_try = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
auto servers = std::make_shared<std::vector<ProtocolServerAdapter>>();
|
||||||
|
|
||||||
|
#if USE_NURAFT
|
||||||
|
/// Initialize test keeper RAFT. Do nothing if no nu_keeper_server in config.
|
||||||
|
global_context->initializeKeeperStorageDispatcher();
|
||||||
|
for (const auto & listen_host : listen_hosts)
|
||||||
|
{
|
||||||
|
/// TCP Keeper
|
||||||
|
const char * port_name = "keeper_server.tcp_port";
|
||||||
|
createServer(listen_host, port_name, listen_try, [&](UInt16 port)
|
||||||
|
{
|
||||||
|
Poco::Net::ServerSocket socket;
|
||||||
|
auto address = socketBindListen(socket, listen_host, port);
|
||||||
|
socket.setReceiveTimeout(settings.receive_timeout);
|
||||||
|
socket.setSendTimeout(settings.send_timeout);
|
||||||
|
servers->emplace_back(
|
||||||
|
port_name,
|
||||||
|
std::make_unique<Poco::Net::TCPServer>(
|
||||||
|
new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams));
|
||||||
|
|
||||||
|
LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString());
|
||||||
|
});
|
||||||
|
|
||||||
|
const char * secure_port_name = "keeper_server.tcp_port_secure";
|
||||||
|
createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port)
|
||||||
|
{
|
||||||
|
#if USE_SSL
|
||||||
|
Poco::Net::SecureServerSocket socket;
|
||||||
|
auto address = socketBindListen(socket, listen_host, port, /* secure = */ true);
|
||||||
|
socket.setReceiveTimeout(settings.receive_timeout);
|
||||||
|
socket.setSendTimeout(settings.send_timeout);
|
||||||
|
servers->emplace_back(
|
||||||
|
secure_port_name,
|
||||||
|
std::make_unique<Poco::Net::TCPServer>(
|
||||||
|
new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams));
|
||||||
|
LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString());
|
||||||
|
#else
|
||||||
|
UNUSED(port);
|
||||||
|
throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.",
|
||||||
|
ErrorCodes::SUPPORT_IS_DISABLED};
|
||||||
|
#endif
|
||||||
|
});
|
||||||
|
}
|
||||||
|
#else
|
||||||
|
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "ClickHouse keeper built without NuRaft library. Cannot use coordination.");
|
||||||
|
#endif
|
||||||
|
|
||||||
|
for (auto & server : *servers)
|
||||||
|
server.start();
|
||||||
|
|
||||||
|
SCOPE_EXIT({
|
||||||
|
LOG_INFO(log, "Shutting down.");
|
||||||
|
|
||||||
|
global_context->shutdown();
|
||||||
|
|
||||||
|
LOG_DEBUG(log, "Waiting for current connections to Keeper to finish.");
|
||||||
|
int current_connections = 0;
|
||||||
|
for (auto & server : *servers)
|
||||||
|
{
|
||||||
|
server.stop();
|
||||||
|
current_connections += server.currentConnections();
|
||||||
|
}
|
||||||
|
|
||||||
|
if (current_connections)
|
||||||
|
LOG_INFO(log, "Closed all listening sockets. Waiting for {} outstanding connections.", current_connections);
|
||||||
|
else
|
||||||
|
LOG_INFO(log, "Closed all listening sockets.");
|
||||||
|
|
||||||
|
if (current_connections > 0)
|
||||||
|
current_connections = waitServersToFinish(*servers, config().getInt("shutdown_wait_unfinished", 5));
|
||||||
|
|
||||||
|
if (current_connections)
|
||||||
|
LOG_INFO(log, "Closed connections to Keeper. But {} remain. Probably some users cannot finish their connections after context shutdown.", current_connections);
|
||||||
|
else
|
||||||
|
LOG_INFO(log, "Closed connections to Keeper.");
|
||||||
|
|
||||||
|
global_context->shutdownKeeperStorageDispatcher();
|
||||||
|
|
||||||
|
/// Wait server pool to avoid use-after-free of destroyed context in the handlers
|
||||||
|
server_pool.joinAll();
|
||||||
|
|
||||||
|
/** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available.
|
||||||
|
* At this moment, no one could own shared part of Context.
|
||||||
|
*/
|
||||||
|
global_context.reset();
|
||||||
|
shared_context.reset();
|
||||||
|
|
||||||
|
LOG_DEBUG(log, "Destroyed global context.");
|
||||||
|
|
||||||
|
if (current_connections)
|
||||||
|
{
|
||||||
|
LOG_INFO(log, "Will shutdown forcefully.");
|
||||||
|
forceShutdown();
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
|
||||||
|
buildLoggers(config(), logger());
|
||||||
|
|
||||||
|
LOG_INFO(log, "Ready for connections.");
|
||||||
|
|
||||||
|
waitForTerminationRequest();
|
||||||
|
|
||||||
|
return Application::EXIT_OK;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
void Keeper::logRevision() const
|
||||||
|
{
|
||||||
|
Poco::Logger::root().information("Starting ClickHouse Keeper " + std::string{VERSION_STRING}
|
||||||
|
+ " with revision " + std::to_string(ClickHouseRevision::getVersionRevision())
|
||||||
|
+ ", " + build_id_info
|
||||||
|
+ ", PID " + std::to_string(getpid()));
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
}
|
programs/keeper/Keeper.h (new file, 69 lines)
@ -0,0 +1,69 @@
|
|||||||
|
#pragma once
|
||||||
|
|
||||||
|
#include <Server/IServer.h>
|
||||||
|
#include <daemon/BaseDaemon.h>
|
||||||
|
|
||||||
|
namespace Poco
|
||||||
|
{
|
||||||
|
namespace Net
|
||||||
|
{
|
||||||
|
class ServerSocket;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
|
||||||
|
/// standalone clickhouse-keeper server (replacement for ZooKeeper). Uses the same
|
||||||
|
/// config as clickhouse-server. Serves requests on TCP ports with or without
|
||||||
|
/// SSL using ZooKeeper protocol.
|
||||||
|
class Keeper : public BaseDaemon, public IServer
|
||||||
|
{
|
||||||
|
public:
|
||||||
|
using ServerApplication::run;
|
||||||
|
|
||||||
|
Poco::Util::LayeredConfiguration & config() const override
|
||||||
|
{
|
||||||
|
return BaseDaemon::config();
|
||||||
|
}
|
||||||
|
|
||||||
|
Poco::Logger & logger() const override
|
||||||
|
{
|
||||||
|
return BaseDaemon::logger();
|
||||||
|
}
|
||||||
|
|
||||||
|
ContextPtr context() const override
|
||||||
|
{
|
||||||
|
return global_context;
|
||||||
|
}
|
||||||
|
|
||||||
|
bool isCancelled() const override
|
||||||
|
{
|
||||||
|
return BaseDaemon::isCancelled();
|
||||||
|
}
|
||||||
|
|
||||||
|
void defineOptions(Poco::Util::OptionSet & _options) override;
|
||||||
|
|
||||||
|
protected:
|
||||||
|
void logRevision() const override;
|
||||||
|
|
||||||
|
int run() override;
|
||||||
|
|
||||||
|
void initialize(Application & self) override;
|
||||||
|
|
||||||
|
void uninitialize() override;
|
||||||
|
|
||||||
|
int main(const std::vector<std::string> & args) override;
|
||||||
|
|
||||||
|
std::string getDefaultConfigFileName() const override;
|
||||||
|
|
||||||
|
private:
|
||||||
|
ContextPtr global_context;
|
||||||
|
|
||||||
|
Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const;
|
||||||
|
|
||||||
|
using CreateServerFunc = std::function<void(UInt16)>;
|
||||||
|
void createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const;
|
||||||
|
};
|
||||||
|
|
||||||
|
}
|
programs/keeper/clickhouse-keeper.cpp (new file, 6 lines)
@ -0,0 +1,6 @@
|
|||||||
|
int mainEntryClickHouseKeeper(int argc, char ** argv);
|
||||||
|
|
||||||
|
int main(int argc_, char ** argv_)
|
||||||
|
{
|
||||||
|
return mainEntryClickHouseKeeper(argc_, argv_);
|
||||||
|
}
|
programs/keeper/keeper_config.xml (new file, 81 lines)
@ -0,0 +1,81 @@
|
|||||||
|
<yandex>
|
||||||
|
<logger>
|
||||||
|
<!-- Possible levels [1]:
|
||||||
|
|
||||||
|
- none (turns off logging)
|
||||||
|
- fatal
|
||||||
|
- critical
|
||||||
|
- error
|
||||||
|
- warning
|
||||||
|
- notice
|
||||||
|
- information
|
||||||
|
- debug
|
||||||
|
- trace
|
||||||
|
|
||||||
|
[1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
|
||||||
|
-->
|
||||||
|
<level>trace</level>
|
||||||
|
<log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
|
||||||
|
<errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
|
||||||
|
<!-- Rotation policy
|
||||||
|
See https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/FileChannel.h#L54-L85
|
||||||
|
-->
|
||||||
|
<size>1000M</size>
|
||||||
|
<count>10</count>
|
||||||
|
<!-- <console>1</console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
|
||||||
|
</logger>
|
||||||
|
|
||||||
|
<max_connections>4096</max_connections>
|
||||||
|
|
||||||
|
<keeper_server>
|
||||||
|
<tcp_port>9181</tcp_port>
|
||||||
|
|
||||||
|
<!-- Must be unique among all keeper serves -->
|
||||||
|
<server_id>1</server_id>
|
||||||
|
|
||||||
|
<log_storage_path>/var/lib/clickhouse/coordination/logs</log_storage_path>
|
||||||
|
<snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
|
||||||
|
|
||||||
|
<coordination_settings>
|
||||||
|
<operation_timeout_ms>10000</operation_timeout_ms>
|
||||||
|
<session_timeout_ms>30000</session_timeout_ms>
|
||||||
|
<raft_logs_level>information</raft_logs_level>
|
||||||
|
<!-- All settings listed in https://github.com/ClickHouse/ClickHouse/blob/master/src/Coordination/CoordinationSettings.h -->
|
||||||
|
</coordination_settings>
|
||||||
|
|
||||||
|
<raft_configuration>
|
||||||
|
<server>
|
||||||
|
<id>1</id>
|
||||||
|
|
||||||
|
<!-- Internal port and hostname -->
|
||||||
|
<hostname>localhost</hostname>
|
||||||
|
<port>44444</port>
|
||||||
|
</server>
|
||||||
|
|
||||||
|
<!-- Add more servers here -->
|
||||||
|
|
||||||
|
</raft_configuration>
|
||||||
|
</keeper_server>
|
||||||
|
|
||||||
|
|
||||||
|
<openSSL>
|
||||||
|
<server>
|
||||||
|
<!-- Used for secure tcp port -->
|
||||||
|
<!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
|
||||||
|
<certificateFile>/etc/clickhouse-keeper/server.crt</certificateFile>
|
||||||
|
<privateKeyFile>/etc/clickhouse-keeper/server.key</privateKeyFile>
|
||||||
|
<!-- dhparams are optional. You can delete the <dhParamsFile> element.
|
||||||
|
To generate dhparams, use the following command:
|
||||||
|
openssl dhparam -out /etc/clickhouse-keeper/dhparam.pem 4096
|
||||||
|
Only file format with BEGIN DH PARAMETERS is supported.
|
||||||
|
-->
|
||||||
|
<dhParamsFile>/etc/clickhouse-keeper/dhparam.pem</dhParamsFile>
|
||||||
|
<verificationMode>none</verificationMode>
|
||||||
|
<loadDefaultCAFile>true</loadDefaultCAFile>
|
||||||
|
<cacheSessions>true</cacheSessions>
|
||||||
|
<disableProtocols>sslv2,sslv3</disableProtocols>
|
||||||
|
<preferServerCiphers>true</preferServerCiphers>
|
||||||
|
</server>
|
||||||
|
</openSSL>
|
||||||
|
|
||||||
|
</yandex>
|
programs/keeper/keeper_embedded.xml (new file, 21 lines)
@ -0,0 +1,21 @@
|
|||||||
|
<yandex>
|
||||||
|
<logger>
|
||||||
|
<level>trace</level>
|
||||||
|
<console>true</console>
|
||||||
|
</logger>
|
||||||
|
|
||||||
|
<keeper_server>
|
||||||
|
<tcp_port>9181</tcp_port>
|
||||||
|
<server_id>1</server_id>
|
||||||
|
<log_storage_path>./keeper_log</log_storage_path>
|
||||||
|
<snapshot_storage_path>./keeper_snapshot</snapshot_storage_path>
|
||||||
|
|
||||||
|
<raft_configuration>
|
||||||
|
<server>
|
||||||
|
<id>1</id>
|
||||||
|
<hostname>localhost</hostname>
|
||||||
|
<port>44444</port>
|
||||||
|
</server>
|
||||||
|
</raft_configuration>
|
||||||
|
</keeper_server>
|
||||||
|
</yandex>
|
@ -55,6 +55,9 @@ int mainEntryClickHouseObfuscator(int argc, char ** argv);
|
|||||||
#if ENABLE_CLICKHOUSE_GIT_IMPORT
|
#if ENABLE_CLICKHOUSE_GIT_IMPORT
|
||||||
int mainEntryClickHouseGitImport(int argc, char ** argv);
|
int mainEntryClickHouseGitImport(int argc, char ** argv);
|
||||||
#endif
|
#endif
|
||||||
|
#if ENABLE_CLICKHOUSE_KEEPER
|
||||||
|
int mainEntryClickHouseKeeper(int argc, char ** argv);
|
||||||
|
#endif
|
||||||
#if ENABLE_CLICKHOUSE_INSTALL
|
#if ENABLE_CLICKHOUSE_INSTALL
|
||||||
int mainEntryClickHouseInstall(int argc, char ** argv);
|
int mainEntryClickHouseInstall(int argc, char ** argv);
|
||||||
int mainEntryClickHouseStart(int argc, char ** argv);
|
int mainEntryClickHouseStart(int argc, char ** argv);
|
||||||
@ -112,6 +115,9 @@ std::pair<const char *, MainFunc> clickhouse_applications[] =
|
|||||||
#if ENABLE_CLICKHOUSE_GIT_IMPORT
|
#if ENABLE_CLICKHOUSE_GIT_IMPORT
|
||||||
{"git-import", mainEntryClickHouseGitImport},
|
{"git-import", mainEntryClickHouseGitImport},
|
||||||
#endif
|
#endif
|
||||||
|
#if ENABLE_CLICKHOUSE_KEEPER
|
||||||
|
{"keeper", mainEntryClickHouseKeeper},
|
||||||
|
#endif
|
||||||
#if ENABLE_CLICKHOUSE_INSTALL
|
#if ENABLE_CLICKHOUSE_INSTALL
|
||||||
{"install", mainEntryClickHouseInstall},
|
{"install", mainEntryClickHouseInstall},
|
||||||
{"start", mainEntryClickHouseStart},
|
{"start", mainEntryClickHouseStart},
|
||||||
|
@ -31,37 +31,4 @@ clickhouse_program_add(server)
|
|||||||
|
|
||||||
install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-server" COMPONENT clickhouse)
|
install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-server" COMPONENT clickhouse)
|
||||||
|
|
||||||
# TODO We actually need this on Mac, FreeBSD.
|
clickhouse_embed_binaries(server config.xml users.xml embedded.xml play.html)
|
||||||
if (OS_LINUX)
|
|
||||||
# Embed default config files as a resource into the binary.
|
|
||||||
# This is needed for two purposes:
|
|
||||||
# 1. Allow to run the binary without download of any other files.
|
|
||||||
# 2. Allow to implement "sudo clickhouse install" tool.
|
|
||||||
|
|
||||||
foreach(RESOURCE_FILE config.xml users.xml embedded.xml play.html)
|
|
||||||
set(RESOURCE_OBJ ${RESOURCE_FILE}.o)
|
|
||||||
set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ})
|
|
||||||
|
|
||||||
# https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake
|
|
||||||
# PPC64LE fails to do this with objcopy, use ld or lld instead
|
|
||||||
if (ARCH_PPC64LE)
|
|
||||||
add_custom_command(OUTPUT ${RESOURCE_OBJ}
|
|
||||||
COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE})
|
|
||||||
else()
|
|
||||||
add_custom_command(OUTPUT ${RESOURCE_OBJ}
|
|
||||||
COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}"
|
|
||||||
COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents
|
|
||||||
"${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}")
|
|
||||||
endif()
|
|
||||||
set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true)
|
|
||||||
endforeach(RESOURCE_FILE)
|
|
||||||
|
|
||||||
add_library(clickhouse_server_configs STATIC ${RESOURCE_OBJS})
|
|
||||||
set_target_properties(clickhouse_server_configs PROPERTIES LINKER_LANGUAGE C)
|
|
||||||
|
|
||||||
# whole-archive prevents symbols from being discarded for unknown reason
|
|
||||||
# CMake can shuffle each of target_link_libraries arguments with other
|
|
||||||
# libraries in linker command. To avoid this we hardcode whole-archive
|
|
||||||
# library into single string.
|
|
||||||
add_dependencies(clickhouse-server-lib clickhouse_server_configs)
|
|
||||||
endif ()
|
|
||||||
|
programs/server/config-example.yaml.disabled (new file, 86 lines)
@ -0,0 +1,86 @@
|
|||||||
|
# We can use 3 main node types in YAML: Scalar, Map and Sequence.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# A Scalar is a simple key-value pair:
|
||||||
|
|
||||||
|
scalar: 123
|
||||||
|
|
||||||
|
# Here we have a key "scalar" and value "123"
|
||||||
|
# If we rewrite this in XML, we will get <scalar>123</scalar>
|
||||||
|
|
||||||
|
# We can also represent an empty value with '':
|
||||||
|
|
||||||
|
key: ''
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# A Map is a node, which contains other nodes:
|
||||||
|
|
||||||
|
map:
|
||||||
|
key1: value1
|
||||||
|
key2: value2
|
||||||
|
small_map:
|
||||||
|
key3: value3
|
||||||
|
|
||||||
|
# This map can be converted into:
|
||||||
|
# <map>
|
||||||
|
# <key1>value1</key1>
|
||||||
|
# <key2>value2</key2>
|
||||||
|
# <small_map>
|
||||||
|
# <key3>value3</key3>
|
||||||
|
# </small_map>
|
||||||
|
# </map>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# A Sequence is a node, which contains also other nodes.
|
||||||
|
# The main difference from Map is that Sequence can also contain simple values.
|
||||||
|
|
||||||
|
sequence:
|
||||||
|
- val1
|
||||||
|
- val2
|
||||||
|
- key: 123
|
||||||
|
- map:
|
||||||
|
mkey1: foo
|
||||||
|
mkey2: bar
|
||||||
|
|
||||||
|
# We can represent it in XML this way:
|
||||||
|
# <sequence>val1</sequence>
|
||||||
|
# <sequence>val2</sequence>
|
||||||
|
# <sequence>
|
||||||
|
# <key>123</key>
|
||||||
|
# </sequence>
|
||||||
|
# <sequence>
|
||||||
|
# <map>
|
||||||
|
# <mkey1>foo</mkey1>
|
||||||
|
# <mkey2>bar</mkey2>
|
||||||
|
# </map>
|
||||||
|
# </sequence>
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# YAML does not have direct support for structures like XML attributes.
|
||||||
|
# We represent them as nodes with @ prefix in key. Note, that @ is reserved by YAML standard,
|
||||||
|
# so you will need to write double quotes around the key. Both Map and Sequence can have
|
||||||
|
# attributes as children nodes
|
||||||
|
|
||||||
|
map:
|
||||||
|
"@attr1": value1
|
||||||
|
"@attr2": value2
|
||||||
|
key: 123
|
||||||
|
|
||||||
|
# This gives us:
|
||||||
|
# <map attr1="value1" attr2="value2">
|
||||||
|
# <key>123</key>
|
||||||
|
# </map>
|
||||||
|
|
||||||
|
sequence:
|
||||||
|
- "@attr1": value1
|
||||||
|
- "@attr2": value2
|
||||||
|
- 123
|
||||||
|
- abc
|
||||||
|
|
||||||
|
# And this gives us:
|
||||||
|
# <map attr1="value1" attr2="value2">123</map>
|
||||||
|
# <map attr1="value1" attr2="value2">abc</map>
|
@@ -362,6 +362,20 @@
             bind_dn - template used to construct the DN to bind to.
                 The resulting DN will be constructed by replacing all '{user_name}' substrings of the template with the actual
                 user name during each authentication attempt.
+            user_dn_detection - section with LDAP search parameters for detecting the actual user DN of the bound user.
+                This is mainly used in search filters for further role mapping when the server is Active Directory. The
+                resulting user DN will be used when replacing '{user_dn}' substrings wherever they are allowed. By default,
+                user DN is set equal to bind DN, but once search is performed, it will be updated to the actual detected
+                user DN value.
+                base_dn - template used to construct the base DN for the LDAP search.
+                    The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings
+                    of the template with the actual user name and bind DN during the LDAP search.
+                scope - scope of the LDAP search.
+                    Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default).
+                search_filter - template used to construct the search filter for the LDAP search.
+                    The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}'
+                    substrings of the template with the actual user name, bind DN, and base DN during the LDAP search.
+                    Note, that the special characters must be escaped properly in XML.
             verification_cooldown - a period of time, in seconds, after a successful bind attempt, during which a user will be assumed
                 to be successfully authenticated for all consecutive requests without contacting the LDAP server.
                 Specify 0 (the default) to disable caching and force contacting the LDAP server for each authentication request.
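The verification_cooldown behaviour described above boils down to a timestamp comparison against the last successful bind. A minimal, self-contained C++ sketch of that check follows (illustrative only; the function and variable names are not taken from the ClickHouse sources):

    #include <chrono>

    // Illustrative sketch of the 'verification_cooldown' rule: a bind that succeeded at
    // 'last_successful_bind' is still trusted if it happened no more than 'cooldown' ago.
    // A cooldown of 0 disables the cache, so every request goes back to the LDAP server.
    static bool canSkipLDAPBind(
        std::chrono::steady_clock::time_point last_successful_bind,
        std::chrono::seconds cooldown,
        std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now())
    {
        return cooldown.count() > 0 && now - last_successful_bind <= cooldown;
    }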
@@ -393,6 +407,17 @@
         <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
         <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
     </my_ldap_server>
+    Example (typical Active Directory with configured user DN detection for further role mapping):
+    <my_ad_server>
+        <host>localhost</host>
+        <port>389</port>
+        <bind_dn>EXAMPLE\{user_name}</bind_dn>
+        <user_dn_detection>
+            <base_dn>CN=Users,DC=example,DC=com</base_dn>
+            <search_filter>(&(objectClass=user)(sAMAccountName={user_name}))</search_filter>
+        </user_dn_detection>
+        <enable_tls>no</enable_tls>
+    </my_ad_server>
     -->
     </ldap_servers>

@@ -444,15 +469,16 @@
             There can be multiple 'role_mapping' sections defined inside the same 'ldap' section. All of them will be
             applied.
                 base_dn - template used to construct the base DN for the LDAP search.
-                    The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings
-                    of the template with the actual user name and bind DN during each LDAP search.
+                    The resulting DN will be constructed by replacing all '{user_name}', '{bind_dn}', and '{user_dn}'
+                    substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
                 scope - scope of the LDAP search.
                     Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default).
                 search_filter - template used to construct the search filter for the LDAP search.
-                    The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}'
-                    substrings of the template with the actual user name, bind DN, and base DN during each LDAP search.
+                    The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', '{user_dn}', and
+                    '{base_dn}' substrings of the template with the actual user name, bind DN, user DN, and base DN during
+                    each LDAP search.
                     Note, that the special characters must be escaped properly in XML.
-                attribute - attribute name whose values will be returned by the LDAP search.
+                attribute - attribute name whose values will be returned by the LDAP search. 'cn', by default.
                 prefix - prefix, that will be expected to be in front of each string in the original list of strings returned by
                     the LDAP search. Prefix will be removed from the original strings and resulting strings will be treated
                     as local role names. Empty, by default.
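The prefix handling described above is plain string manipulation: values returned by the role-mapping search are expected to start with the configured prefix, the prefix is stripped, and the remainder is used as a local role name. A minimal sketch with made-up names follows (these are not functions from the sources):

    #include <optional>
    #include <string>

    // Illustrative only: strip the configured prefix from one value returned by the
    // role-mapping LDAP search and use the remainder as a local role name.
    // How values without the prefix are treated is outside the scope of this sketch.
    static std::optional<std::string> attributeValueToRoleName(const std::string & value, const std::string & prefix)
    {
        if (value.size() < prefix.size() || value.compare(0, prefix.size(), prefix) != 0)
            return std::nullopt;
        return value.substr(prefix.size());
    }

    // attributeValueToRoleName("clickhouse_admin", "clickhouse_") -> "admin"
    // An empty prefix (the default) leaves the value unchanged.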
@@ -471,6 +497,17 @@
                 <prefix>clickhouse_</prefix>
             </role_mapping>
         </ldap>
+        Example (typical Active Directory with role mapping that relies on the detected user DN):
+        <ldap>
+            <server>my_ad_server</server>
+            <role_mapping>
+                <base_dn>CN=Users,DC=example,DC=com</base_dn>
+                <attribute>CN</attribute>
+                <scope>subtree</scope>
+                <search_filter>(&(objectClass=group)(member={user_dn}))</search_filter>
+                <prefix>clickhouse_</prefix>
+            </role_mapping>
+        </ldap>
     -->
     </user_directories>

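All of the '{user_name}', '{bind_dn}', '{user_dn}' and '{base_dn}' templates mentioned in the comments above are filled in by plain substring replacement. A minimal, self-contained sketch of that substitution follows (illustrative only; this is not the internal ClickHouse helper):

    #include <map>
    #include <string>

    // Illustrative only: replace every occurrence of each placeholder with its value.
    // The server performs an equivalent substitution when it builds bind DNs, base DNs
    // and search filters from the configured templates.
    static std::string substitutePlaceholders(std::string text, const std::map<std::string, std::string> & values)
    {
        for (const auto & [placeholder, value] : values)
        {
            for (std::string::size_type pos = text.find(placeholder); pos != std::string::npos;
                 pos = text.find(placeholder, pos + value.size()))
                text.replace(pos, placeholder.size(), value);
        }
        return text;
    }

    // Example: substitutePlaceholders("(&(objectClass=group)(member={user_dn}))",
    //                                 {{"{user_dn}", "CN=jsmith,CN=Users,DC=example,DC=com"}})
    // yields "(&(objectClass=group)(member=CN=jsmith,CN=Users,DC=example,DC=com))".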
@@ -143,11 +143,13 @@ ContextAccess::ContextAccess(const AccessControlManager & manager_, const Params
     : manager(&manager_)
     , params(params_)
 {
+    std::lock_guard lock{mutex};
+
     subscription_for_user_change = manager->subscribeForChanges(
         *params.user_id, [this](const UUID &, const AccessEntityPtr & entity)
     {
         UserPtr changed_user = entity ? typeid_cast<UserPtr>(entity) : nullptr;
-        std::lock_guard lock{mutex};
+        std::lock_guard lock2{mutex};
         setUser(changed_user);
     });

@@ -189,7 +191,7 @@ void ContextAccess::setUser(const UserPtr & user_) const
         current_roles_with_admin_option = user->granted_roles.findGrantedWithAdminOption(params.current_roles);
     }

-    subscription_for_roles_changes = {};
+    subscription_for_roles_changes.reset();
     enabled_roles = manager->getEnabledRoles(current_roles, current_roles_with_admin_option);
     subscription_for_roles_changes = enabled_roles->subscribeForChanges([this](const std::shared_ptr<const EnabledRolesInfo> & roles_info_)
     {
@@ -20,13 +20,42 @@ namespace ErrorCodes
 namespace
 {

-auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const String & name)
+void parseLDAPSearchParams(LDAPClient::SearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix)
+{
+    const bool has_base_dn = config.has(prefix + ".base_dn");
+    const bool has_search_filter = config.has(prefix + ".search_filter");
+    const bool has_attribute = config.has(prefix + ".attribute");
+    const bool has_scope = config.has(prefix + ".scope");
+
+    if (has_base_dn)
+        params.base_dn = config.getString(prefix + ".base_dn");
+
+    if (has_search_filter)
+        params.search_filter = config.getString(prefix + ".search_filter");
+
+    if (has_attribute)
+        params.attribute = config.getString(prefix + ".attribute");
+
+    if (has_scope)
+    {
+        auto scope = config.getString(prefix + ".scope");
+        boost::algorithm::to_lower(scope);
+
+        if (scope == "base") params.scope = LDAPClient::SearchParams::Scope::BASE;
+        else if (scope == "one_level") params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL;
+        else if (scope == "subtree") params.scope = LDAPClient::SearchParams::Scope::SUBTREE;
+        else if (scope == "children") params.scope = LDAPClient::SearchParams::Scope::CHILDREN;
+        else
+            throw Exception("Invalid value for 'scope' field of LDAP search parameters in '" + prefix +
+                "' section, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS);
+    }
+}
+
+void parseLDAPServer(LDAPClient::Params & params, const Poco::Util::AbstractConfiguration & config, const String & name)
 {
     if (name.empty())
         throw Exception("LDAP server name cannot be empty", ErrorCodes::BAD_ARGUMENTS);

-    LDAPClient::Params params;
-
     const String ldap_server_config = "ldap_servers." + name;

     const bool has_host = config.has(ldap_server_config + ".host");
@ -34,6 +63,7 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str
|
|||||||
const bool has_bind_dn = config.has(ldap_server_config + ".bind_dn");
|
const bool has_bind_dn = config.has(ldap_server_config + ".bind_dn");
|
||||||
const bool has_auth_dn_prefix = config.has(ldap_server_config + ".auth_dn_prefix");
|
const bool has_auth_dn_prefix = config.has(ldap_server_config + ".auth_dn_prefix");
|
||||||
const bool has_auth_dn_suffix = config.has(ldap_server_config + ".auth_dn_suffix");
|
const bool has_auth_dn_suffix = config.has(ldap_server_config + ".auth_dn_suffix");
|
||||||
|
const bool has_user_dn_detection = config.has(ldap_server_config + ".user_dn_detection");
|
||||||
const bool has_verification_cooldown = config.has(ldap_server_config + ".verification_cooldown");
|
const bool has_verification_cooldown = config.has(ldap_server_config + ".verification_cooldown");
|
||||||
const bool has_enable_tls = config.has(ldap_server_config + ".enable_tls");
|
const bool has_enable_tls = config.has(ldap_server_config + ".enable_tls");
|
||||||
const bool has_tls_minimum_protocol_version = config.has(ldap_server_config + ".tls_minimum_protocol_version");
|
const bool has_tls_minimum_protocol_version = config.has(ldap_server_config + ".tls_minimum_protocol_version");
|
||||||
@ -66,6 +96,17 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str
|
|||||||
params.bind_dn = auth_dn_prefix + "{user_name}" + auth_dn_suffix;
|
params.bind_dn = auth_dn_prefix + "{user_name}" + auth_dn_suffix;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if (has_user_dn_detection)
|
||||||
|
{
|
||||||
|
if (!params.user_dn_detection)
|
||||||
|
{
|
||||||
|
params.user_dn_detection.emplace();
|
||||||
|
params.user_dn_detection->attribute = "dn";
|
||||||
|
}
|
||||||
|
|
||||||
|
parseLDAPSearchParams(*params.user_dn_detection, config, ldap_server_config + ".user_dn_detection");
|
||||||
|
}
|
||||||
|
|
||||||
if (has_verification_cooldown)
|
if (has_verification_cooldown)
|
||||||
params.verification_cooldown = std::chrono::seconds{config.getUInt64(ldap_server_config + ".verification_cooldown")};
|
params.verification_cooldown = std::chrono::seconds{config.getUInt64(ldap_server_config + ".verification_cooldown")};
|
||||||
|
|
||||||
@ -143,14 +184,10 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str
|
|||||||
}
|
}
|
||||||
else
|
else
|
||||||
params.port = (params.enable_tls == LDAPClient::Params::TLSEnable::YES ? 636 : 389);
|
params.port = (params.enable_tls == LDAPClient::Params::TLSEnable::YES ? 636 : 389);
|
||||||
|
|
||||||
return params;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config)
|
void parseKerberosParams(GSSAcceptorContext::Params & params, const Poco::Util::AbstractConfiguration & config)
|
||||||
{
|
{
|
||||||
GSSAcceptorContext::Params params;
|
|
||||||
|
|
||||||
Poco::Util::AbstractConfiguration::Keys keys;
|
Poco::Util::AbstractConfiguration::Keys keys;
|
||||||
config.keys("kerberos", keys);
|
config.keys("kerberos", keys);
|
||||||
|
|
||||||
@ -180,12 +217,20 @@ auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config)
|
|||||||
|
|
||||||
params.realm = config.getString("kerberos.realm", "");
|
params.realm = config.getString("kerberos.realm", "");
|
||||||
params.principal = config.getString("kerberos.principal", "");
|
params.principal = config.getString("kerberos.principal", "");
|
||||||
|
|
||||||
return params;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix)
|
||||||
|
{
|
||||||
|
parseLDAPSearchParams(params, config, prefix);
|
||||||
|
|
||||||
|
const bool has_prefix = config.has(prefix + ".prefix");
|
||||||
|
|
||||||
|
if (has_prefix)
|
||||||
|
params.prefix = config.getString(prefix + ".prefix");
|
||||||
|
}
|
||||||
|
|
||||||
void ExternalAuthenticators::reset()
|
void ExternalAuthenticators::reset()
|
||||||
{
|
{
|
||||||
std::scoped_lock lock(mutex);
|
std::scoped_lock lock(mutex);
|
||||||
@ -229,7 +274,8 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur
|
|||||||
{
|
{
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
ldap_client_params_blueprint.insert_or_assign(ldap_server_name, parseLDAPServer(config, ldap_server_name));
|
ldap_client_params_blueprint.erase(ldap_server_name);
|
||||||
|
parseLDAPServer(ldap_client_params_blueprint.emplace(ldap_server_name, LDAPClient::Params{}).first->second, config, ldap_server_name);
|
||||||
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
@ -240,7 +286,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur
|
|||||||
try
|
try
|
||||||
{
|
{
|
||||||
if (kerberos_keys_count > 0)
|
if (kerberos_keys_count > 0)
|
||||||
kerberos_params = parseKerberosParams(config);
|
parseKerberosParams(kerberos_params.emplace(), config);
|
||||||
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
@ -249,7 +295,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur
|
|||||||
}
|
}
|
||||||
|
|
||||||
bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const BasicCredentials & credentials,
|
bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const BasicCredentials & credentials,
|
||||||
const LDAPClient::SearchParamsList * search_params, LDAPClient::SearchResultsList * search_results) const
|
const LDAPClient::RoleSearchParamsList * role_search_params, LDAPClient::SearchResultsList * role_search_results) const
|
||||||
{
|
{
|
||||||
std::optional<LDAPClient::Params> params;
|
std::optional<LDAPClient::Params> params;
|
||||||
std::size_t params_hash = 0;
|
std::size_t params_hash = 0;
|
||||||
@ -267,9 +313,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B
|
|||||||
params->password = credentials.getPassword();
|
params->password = credentials.getPassword();
|
||||||
|
|
||||||
params->combineCoreHash(params_hash);
|
params->combineCoreHash(params_hash);
|
||||||
if (search_params)
|
if (role_search_params)
|
||||||
{
|
{
|
||||||
for (const auto & params_instance : *search_params)
|
for (const auto & params_instance : *role_search_params)
|
||||||
{
|
{
|
||||||
params_instance.combineHash(params_hash);
|
params_instance.combineHash(params_hash);
|
||||||
}
|
}
|
||||||
@ -301,14 +347,14 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B
|
|||||||
|
|
||||||
// Ensure that search_params are compatible.
|
// Ensure that search_params are compatible.
|
||||||
(
|
(
|
||||||
search_params == nullptr ?
|
role_search_params == nullptr ?
|
||||||
entry.last_successful_search_results.empty() :
|
entry.last_successful_role_search_results.empty() :
|
||||||
search_params->size() == entry.last_successful_search_results.size()
|
role_search_params->size() == entry.last_successful_role_search_results.size()
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
{
|
{
|
||||||
if (search_results)
|
if (role_search_results)
|
||||||
*search_results = entry.last_successful_search_results;
|
*role_search_results = entry.last_successful_role_search_results;
|
||||||
|
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
@ -326,7 +372,7 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B
|
|||||||
}
|
}
|
||||||
|
|
||||||
LDAPSimpleAuthClient client(params.value());
|
LDAPSimpleAuthClient client(params.value());
|
||||||
const auto result = client.authenticate(search_params, search_results);
|
const auto result = client.authenticate(role_search_params, role_search_results);
|
||||||
const auto current_check_timestamp = std::chrono::steady_clock::now();
|
const auto current_check_timestamp = std::chrono::steady_clock::now();
|
||||||
|
|
||||||
// Update the cache, but only if this is the latest check and the server is still configured in a compatible way.
|
// Update the cache, but only if this is the latest check and the server is still configured in a compatible way.
|
||||||
@ -345,9 +391,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B
|
|||||||
|
|
||||||
std::size_t new_params_hash = 0;
|
std::size_t new_params_hash = 0;
|
||||||
new_params.combineCoreHash(new_params_hash);
|
new_params.combineCoreHash(new_params_hash);
|
||||||
if (search_params)
|
if (role_search_params)
|
||||||
{
|
{
|
||||||
for (const auto & params_instance : *search_params)
|
for (const auto & params_instance : *role_search_params)
|
||||||
{
|
{
|
||||||
params_instance.combineHash(new_params_hash);
|
params_instance.combineHash(new_params_hash);
|
||||||
}
|
}
|
||||||
@ -363,17 +409,17 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B
|
|||||||
entry.last_successful_params_hash = params_hash;
|
entry.last_successful_params_hash = params_hash;
|
||||||
entry.last_successful_authentication_timestamp = current_check_timestamp;
|
entry.last_successful_authentication_timestamp = current_check_timestamp;
|
||||||
|
|
||||||
if (search_results)
|
if (role_search_results)
|
||||||
entry.last_successful_search_results = *search_results;
|
entry.last_successful_role_search_results = *role_search_results;
|
||||||
else
|
else
|
||||||
entry.last_successful_search_results.clear();
|
entry.last_successful_role_search_results.clear();
|
||||||
}
|
}
|
||||||
else if (
|
else if (
|
||||||
entry.last_successful_params_hash != params_hash ||
|
entry.last_successful_params_hash != params_hash ||
|
||||||
(
|
(
|
||||||
search_params == nullptr ?
|
role_search_params == nullptr ?
|
||||||
!entry.last_successful_search_results.empty() :
|
!entry.last_successful_role_search_results.empty() :
|
||||||
search_params->size() != entry.last_successful_search_results.size()
|
role_search_params->size() != entry.last_successful_role_search_results.size()
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
{
|
{
|
||||||
|
@ -34,7 +34,7 @@ public:
|
|||||||
|
|
||||||
// The name and readiness of the credentials must be verified before calling these.
|
// The name and readiness of the credentials must be verified before calling these.
|
||||||
bool checkLDAPCredentials(const String & server, const BasicCredentials & credentials,
|
bool checkLDAPCredentials(const String & server, const BasicCredentials & credentials,
|
||||||
const LDAPClient::SearchParamsList * search_params = nullptr, LDAPClient::SearchResultsList * search_results = nullptr) const;
|
const LDAPClient::RoleSearchParamsList * role_search_params = nullptr, LDAPClient::SearchResultsList * role_search_results = nullptr) const;
|
||||||
bool checkKerberosCredentials(const String & realm, const GSSAcceptorContext & credentials) const;
|
bool checkKerberosCredentials(const String & realm, const GSSAcceptorContext & credentials) const;
|
||||||
|
|
||||||
GSSAcceptorContext::Params getKerberosParams() const;
|
GSSAcceptorContext::Params getKerberosParams() const;
|
||||||
@ -44,7 +44,7 @@ private:
|
|||||||
{
|
{
|
||||||
std::size_t last_successful_params_hash = 0;
|
std::size_t last_successful_params_hash = 0;
|
||||||
std::chrono::steady_clock::time_point last_successful_authentication_timestamp;
|
std::chrono::steady_clock::time_point last_successful_authentication_timestamp;
|
||||||
LDAPClient::SearchResultsList last_successful_search_results;
|
LDAPClient::SearchResultsList last_successful_role_search_results;
|
||||||
};
|
};
|
||||||
|
|
||||||
using LDAPCache = std::unordered_map<String, LDAPCacheEntry>; // user name -> cache entry
|
using LDAPCache = std::unordered_map<String, LDAPCacheEntry>; // user name -> cache entry
|
||||||
@ -58,4 +58,6 @@ private:
|
|||||||
std::optional<GSSAcceptorContext::Params> kerberos_params;
|
std::optional<GSSAcceptorContext::Params> kerberos_params;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix);
|
||||||
|
|
||||||
}
|
}
|
||||||
|
@ -68,34 +68,15 @@ void LDAPAccessStorage::setConfiguration(AccessControlManager * access_control_m
|
|||||||
common_roles_cfg.insert(role_names.begin(), role_names.end());
|
common_roles_cfg.insert(role_names.begin(), role_names.end());
|
||||||
}
|
}
|
||||||
|
|
||||||
LDAPClient::SearchParamsList role_search_params_cfg;
|
LDAPClient::RoleSearchParamsList role_search_params_cfg;
|
||||||
if (has_role_mapping)
|
if (has_role_mapping)
|
||||||
{
|
{
|
||||||
Poco::Util::AbstractConfiguration::Keys all_keys;
|
Poco::Util::AbstractConfiguration::Keys all_keys;
|
||||||
config.keys(prefix, all_keys);
|
config.keys(prefix, all_keys);
|
||||||
for (const auto & key : all_keys)
|
for (const auto & key : all_keys)
|
||||||
{
|
{
|
||||||
if (key != "role_mapping" && key.find("role_mapping[") != 0)
|
if (key == "role_mapping" || key.find("role_mapping[") == 0)
|
||||||
continue;
|
parseLDAPRoleSearchParams(role_search_params_cfg.emplace_back(), config, prefix_str + key);
|
||||||
|
|
||||||
const String rm_prefix = prefix_str + key;
|
|
||||||
const String rm_prefix_str = rm_prefix + '.';
|
|
||||||
role_search_params_cfg.emplace_back();
|
|
||||||
auto & rm_params = role_search_params_cfg.back();
|
|
||||||
|
|
||||||
rm_params.base_dn = config.getString(rm_prefix_str + "base_dn", "");
|
|
||||||
rm_params.search_filter = config.getString(rm_prefix_str + "search_filter", "");
|
|
||||||
rm_params.attribute = config.getString(rm_prefix_str + "attribute", "cn");
|
|
||||||
rm_params.prefix = config.getString(rm_prefix_str + "prefix", "");
|
|
||||||
|
|
||||||
auto scope = config.getString(rm_prefix_str + "scope", "subtree");
|
|
||||||
boost::algorithm::to_lower(scope);
|
|
||||||
if (scope == "base") rm_params.scope = LDAPClient::SearchParams::Scope::BASE;
|
|
||||||
else if (scope == "one_level") rm_params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL;
|
|
||||||
else if (scope == "subtree") rm_params.scope = LDAPClient::SearchParams::Scope::SUBTREE;
|
|
||||||
else if (scope == "children") rm_params.scope = LDAPClient::SearchParams::Scope::CHILDREN;
|
|
||||||
else
|
|
||||||
throw Exception("Invalid value of 'scope' field in '" + key + "' section of LDAP user directory, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS);
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -364,7 +345,7 @@ std::set<String> LDAPAccessStorage::mapExternalRolesNoLock(const LDAPClient::Sea
|
|||||||
|
|
||||||
|
|
||||||
bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials,
|
bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials,
|
||||||
const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const
|
const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const
|
||||||
{
|
{
|
||||||
if (!credentials.isReady())
|
if (!credentials.isReady())
|
||||||
return false;
|
return false;
|
||||||
@ -373,7 +354,7 @@ bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const C
|
|||||||
return false;
|
return false;
|
||||||
|
|
||||||
if (const auto * basic_credentials = dynamic_cast<const BasicCredentials *>(&credentials))
|
if (const auto * basic_credentials = dynamic_cast<const BasicCredentials *>(&credentials))
|
||||||
return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &search_results);
|
return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &role_search_results);
|
||||||
|
|
||||||
return false;
|
return false;
|
||||||
}
|
}
|
||||||
|
@ -68,12 +68,12 @@ private:
|
|||||||
void updateAssignedRolesNoLock(const UUID & id, const String & user_name, const LDAPClient::SearchResultsList & external_roles) const;
|
void updateAssignedRolesNoLock(const UUID & id, const String & user_name, const LDAPClient::SearchResultsList & external_roles) const;
|
||||||
std::set<String> mapExternalRolesNoLock(const LDAPClient::SearchResultsList & external_roles) const;
|
std::set<String> mapExternalRolesNoLock(const LDAPClient::SearchResultsList & external_roles) const;
|
||||||
bool areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials,
|
bool areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials,
|
||||||
const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const;
|
const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const;
|
||||||
|
|
||||||
mutable std::recursive_mutex mutex;
|
mutable std::recursive_mutex mutex;
|
||||||
AccessControlManager * access_control_manager = nullptr;
|
AccessControlManager * access_control_manager = nullptr;
|
||||||
String ldap_server_name;
|
String ldap_server_name;
|
||||||
LDAPClient::SearchParamsList role_search_params;
|
LDAPClient::RoleSearchParamsList role_search_params;
|
||||||
std::set<String> common_role_names; // role name that should be granted to all users at all times
|
std::set<String> common_role_names; // role name that should be granted to all users at all times
|
||||||
mutable std::map<String, std::size_t> external_role_hashes; // user name -> LDAPClient::SearchResultsList hash (most recently retrieved and processed)
|
mutable std::map<String, std::size_t> external_role_hashes; // user name -> LDAPClient::SearchResultsList hash (most recently retrieved and processed)
|
||||||
mutable std::map<String, std::set<String>> users_per_roles; // role name -> user names (...it should be granted to; may but don't have to exist for common roles)
|
mutable std::map<String, std::set<String>> users_per_roles; // role name -> user names (...it should be granted to; may but don't have to exist for common roles)
|
||||||
|
@ -32,6 +32,11 @@ void LDAPClient::SearchParams::combineHash(std::size_t & seed) const
|
|||||||
boost::hash_combine(seed, static_cast<int>(scope));
|
boost::hash_combine(seed, static_cast<int>(scope));
|
||||||
boost::hash_combine(seed, search_filter);
|
boost::hash_combine(seed, search_filter);
|
||||||
boost::hash_combine(seed, attribute);
|
boost::hash_combine(seed, attribute);
|
||||||
|
}
|
||||||
|
|
||||||
|
void LDAPClient::RoleSearchParams::combineHash(std::size_t & seed) const
|
||||||
|
{
|
||||||
|
SearchParams::combineHash(seed);
|
||||||
boost::hash_combine(seed, prefix);
|
boost::hash_combine(seed, prefix);
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -42,6 +47,9 @@ void LDAPClient::Params::combineCoreHash(std::size_t & seed) const
|
|||||||
boost::hash_combine(seed, bind_dn);
|
boost::hash_combine(seed, bind_dn);
|
||||||
boost::hash_combine(seed, user);
|
boost::hash_combine(seed, user);
|
||||||
boost::hash_combine(seed, password);
|
boost::hash_combine(seed, password);
|
||||||
|
|
||||||
|
if (user_dn_detection)
|
||||||
|
user_dn_detection->combineHash(seed);
|
||||||
}
|
}
|
||||||
|
|
||||||
LDAPClient::LDAPClient(const Params & params_)
|
LDAPClient::LDAPClient(const Params & params_)
|
||||||
@ -286,18 +294,33 @@ void LDAPClient::openConnection()
|
|||||||
if (params.enable_tls == LDAPClient::Params::TLSEnable::YES_STARTTLS)
|
if (params.enable_tls == LDAPClient::Params::TLSEnable::YES_STARTTLS)
|
||||||
diag(ldap_start_tls_s(handle, nullptr, nullptr));
|
diag(ldap_start_tls_s(handle, nullptr, nullptr));
|
||||||
|
|
||||||
|
final_user_name = escapeForLDAP(params.user);
|
||||||
|
final_bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", final_user_name} });
|
||||||
|
final_user_dn = final_bind_dn; // The default value... may be updated right after a successful bind.
|
||||||
|
|
||||||
switch (params.sasl_mechanism)
|
switch (params.sasl_mechanism)
|
||||||
{
|
{
|
||||||
case LDAPClient::Params::SASLMechanism::SIMPLE:
|
case LDAPClient::Params::SASLMechanism::SIMPLE:
|
||||||
{
|
{
|
||||||
const auto escaped_user_name = escapeForLDAP(params.user);
|
|
||||||
const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} });
|
|
||||||
|
|
||||||
::berval cred;
|
::berval cred;
|
||||||
cred.bv_val = const_cast<char *>(params.password.c_str());
|
cred.bv_val = const_cast<char *>(params.password.c_str());
|
||||||
cred.bv_len = params.password.size();
|
cred.bv_len = params.password.size();
|
||||||
|
|
||||||
diag(ldap_sasl_bind_s(handle, bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr));
|
diag(ldap_sasl_bind_s(handle, final_bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr));
|
||||||
|
|
||||||
|
// Once bound, run the user DN search query and update the default value, if asked.
|
||||||
|
if (params.user_dn_detection)
|
||||||
|
{
|
||||||
|
const auto user_dn_search_results = search(*params.user_dn_detection);
|
||||||
|
|
||||||
|
if (user_dn_search_results.empty())
|
||||||
|
throw Exception("Failed to detect user DN: empty search results", ErrorCodes::LDAP_ERROR);
|
||||||
|
|
||||||
|
if (user_dn_search_results.size() > 1)
|
||||||
|
throw Exception("Failed to detect user DN: more than one entry in the search results", ErrorCodes::LDAP_ERROR);
|
||||||
|
|
||||||
|
final_user_dn = *user_dn_search_results.begin();
|
||||||
|
}
|
||||||
|
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
@ -316,6 +339,9 @@ void LDAPClient::closeConnection() noexcept
|
|||||||
|
|
||||||
ldap_unbind_ext_s(handle, nullptr, nullptr);
|
ldap_unbind_ext_s(handle, nullptr, nullptr);
|
||||||
handle = nullptr;
|
handle = nullptr;
|
||||||
|
final_user_name.clear();
|
||||||
|
final_bind_dn.clear();
|
||||||
|
final_user_dn.clear();
|
||||||
}
|
}
|
||||||
|
|
||||||
LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
||||||
@ -333,10 +359,19 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
|||||||
case SearchParams::Scope::CHILDREN: scope = LDAP_SCOPE_CHILDREN; break;
|
case SearchParams::Scope::CHILDREN: scope = LDAP_SCOPE_CHILDREN; break;
|
||||||
}
|
}
|
||||||
|
|
||||||
const auto escaped_user_name = escapeForLDAP(params.user);
|
const auto final_base_dn = replacePlaceholders(search_params.base_dn, {
|
||||||
const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} });
|
{"{user_name}", final_user_name},
|
||||||
const auto base_dn = replacePlaceholders(search_params.base_dn, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn} });
|
{"{bind_dn}", final_bind_dn},
|
||||||
const auto search_filter = replacePlaceholders(search_params.search_filter, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn}, {"{base_dn}", base_dn} });
|
{"{user_dn}", final_user_dn}
|
||||||
|
});
|
||||||
|
|
||||||
|
const auto final_search_filter = replacePlaceholders(search_params.search_filter, {
|
||||||
|
{"{user_name}", final_user_name},
|
||||||
|
{"{bind_dn}", final_bind_dn},
|
||||||
|
{"{user_dn}", final_user_dn},
|
||||||
|
{"{base_dn}", final_base_dn}
|
||||||
|
});
|
||||||
|
|
||||||
char * attrs[] = { const_cast<char *>(search_params.attribute.c_str()), nullptr };
|
char * attrs[] = { const_cast<char *>(search_params.attribute.c_str()), nullptr };
|
||||||
::timeval timeout = { params.search_timeout.count(), 0 };
|
::timeval timeout = { params.search_timeout.count(), 0 };
|
||||||
LDAPMessage* msgs = nullptr;
|
LDAPMessage* msgs = nullptr;
|
||||||
@ -349,7 +384,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
|||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
diag(ldap_search_ext_s(handle, base_dn.c_str(), scope, search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs));
|
diag(ldap_search_ext_s(handle, final_base_dn.c_str(), scope, final_search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs));
|
||||||
|
|
||||||
for (
|
for (
|
||||||
auto * msg = ldap_first_message(handle, msgs);
|
auto * msg = ldap_first_message(handle, msgs);
|
||||||
@ -361,6 +396,27 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
|||||||
{
|
{
|
||||||
case LDAP_RES_SEARCH_ENTRY:
|
case LDAP_RES_SEARCH_ENTRY:
|
||||||
{
|
{
|
||||||
|
// Extract DN separately, if the requested attribute is DN.
|
||||||
|
if (boost::iequals("dn", search_params.attribute))
|
||||||
|
{
|
||||||
|
BerElement * ber = nullptr;
|
||||||
|
|
||||||
|
SCOPE_EXIT({
|
||||||
|
if (ber)
|
||||||
|
{
|
||||||
|
ber_free(ber, 0);
|
||||||
|
ber = nullptr;
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
::berval bv;
|
||||||
|
|
||||||
|
diag(ldap_get_dn_ber(handle, msg, &ber, &bv));
|
||||||
|
|
||||||
|
if (bv.bv_val && bv.bv_len > 0)
|
||||||
|
result.emplace(bv.bv_val, bv.bv_len);
|
||||||
|
}
|
||||||
|
|
||||||
BerElement * ber = nullptr;
|
BerElement * ber = nullptr;
|
||||||
|
|
||||||
SCOPE_EXIT({
|
SCOPE_EXIT({
|
||||||
@ -471,12 +527,12 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params)
|
|||||||
return result;
|
return result;
|
||||||
}
|
}
|
||||||
|
|
||||||
bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params, SearchResultsList * search_results)
|
bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results)
|
||||||
{
|
{
|
||||||
if (params.user.empty())
|
if (params.user.empty())
|
||||||
throw Exception("LDAP authentication of a user with empty name is not allowed", ErrorCodes::BAD_ARGUMENTS);
|
throw Exception("LDAP authentication of a user with empty name is not allowed", ErrorCodes::BAD_ARGUMENTS);
|
||||||
|
|
||||||
if (!search_params != !search_results)
|
if (!role_search_params != !role_search_results)
|
||||||
throw Exception("Cannot return LDAP search results", ErrorCodes::BAD_ARGUMENTS);
|
throw Exception("Cannot return LDAP search results", ErrorCodes::BAD_ARGUMENTS);
|
||||||
|
|
||||||
// Silently reject authentication attempt if the password is empty as if it didn't match.
|
// Silently reject authentication attempt if the password is empty as if it didn't match.
|
||||||
@ -489,21 +545,21 @@ bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params,
|
|||||||
openConnection();
|
openConnection();
|
||||||
|
|
||||||
// While connected, run search queries and save the results, if asked.
|
// While connected, run search queries and save the results, if asked.
|
||||||
if (search_params)
|
if (role_search_params)
|
||||||
{
|
{
|
||||||
search_results->clear();
|
role_search_results->clear();
|
||||||
search_results->reserve(search_params->size());
|
role_search_results->reserve(role_search_params->size());
|
||||||
|
|
||||||
try
|
try
|
||||||
{
|
{
|
||||||
for (const auto & single_search_params : *search_params)
|
for (const auto & params_instance : *role_search_params)
|
||||||
{
|
{
|
||||||
search_results->emplace_back(search(single_search_params));
|
role_search_results->emplace_back(search(params_instance));
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
search_results->clear();
|
role_search_results->clear();
|
||||||
throw;
|
throw;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -532,7 +588,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams &)
|
|||||||
throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
|
throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
|
||||||
}
|
}
|
||||||
|
|
||||||
bool LDAPSimpleAuthClient::authenticate(const SearchParamsList *, SearchResultsList *)
|
bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList *, SearchResultsList *)
|
||||||
{
|
{
|
||||||
throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
|
throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
|
||||||
}
|
}
|
||||||
|
@ -38,12 +38,20 @@ public:
|
|||||||
Scope scope = Scope::SUBTREE;
|
Scope scope = Scope::SUBTREE;
|
||||||
String search_filter;
|
String search_filter;
|
||||||
String attribute = "cn";
|
String attribute = "cn";
|
||||||
|
|
||||||
|
void combineHash(std::size_t & seed) const;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct RoleSearchParams
|
||||||
|
: public SearchParams
|
||||||
|
{
|
||||||
String prefix;
|
String prefix;
|
||||||
|
|
||||||
void combineHash(std::size_t & seed) const;
|
void combineHash(std::size_t & seed) const;
|
||||||
};
|
};
|
||||||
|
|
||||||
using SearchParamsList = std::vector<SearchParams>;
|
using RoleSearchParamsList = std::vector<RoleSearchParams>;
|
||||||
|
|
||||||
using SearchResults = std::set<String>;
|
using SearchResults = std::set<String>;
|
||||||
using SearchResultsList = std::vector<SearchResults>;
|
using SearchResultsList = std::vector<SearchResults>;
|
||||||
|
|
||||||
@ -105,6 +113,8 @@ public:
|
|||||||
String user;
|
String user;
|
||||||
String password;
|
String password;
|
||||||
|
|
||||||
|
std::optional<SearchParams> user_dn_detection;
|
||||||
|
|
||||||
std::chrono::seconds verification_cooldown{0};
|
std::chrono::seconds verification_cooldown{0};
|
||||||
|
|
||||||
std::chrono::seconds operation_timeout{40};
|
std::chrono::seconds operation_timeout{40};
|
||||||
@ -134,6 +144,9 @@ protected:
|
|||||||
#if USE_LDAP
|
#if USE_LDAP
|
||||||
LDAP * handle = nullptr;
|
LDAP * handle = nullptr;
|
||||||
#endif
|
#endif
|
||||||
|
String final_user_name;
|
||||||
|
String final_bind_dn;
|
||||||
|
String final_user_dn;
|
||||||
};
|
};
|
||||||
|
|
||||||
class LDAPSimpleAuthClient
|
class LDAPSimpleAuthClient
|
||||||
@ -141,7 +154,7 @@ class LDAPSimpleAuthClient
|
|||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
using LDAPClient::LDAPClient;
|
using LDAPClient::LDAPClient;
|
||||||
bool authenticate(const SearchParamsList * search_params, SearchResultsList * search_results);
|
bool authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results);
|
||||||
};
|
};
|
||||||
|
|
||||||
}
|
}
|
||||||
|
@ -52,6 +52,9 @@ template <typename Value, bool float_return> using FuncQuantilesTDigest = Aggreg
|
|||||||
template <typename Value, bool float_return> using FuncQuantileTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantileTDigestWeighted, true, std::conditional_t<float_return, Float32, void>, false>;
|
template <typename Value, bool float_return> using FuncQuantileTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantileTDigestWeighted, true, std::conditional_t<float_return, Float32, void>, false>;
|
||||||
template <typename Value, bool float_return> using FuncQuantilesTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantilesTDigestWeighted, true, std::conditional_t<float_return, Float32, void>, true>;
|
template <typename Value, bool float_return> using FuncQuantilesTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantilesTDigestWeighted, true, std::conditional_t<float_return, Float32, void>, true>;
|
||||||
|
|
||||||
|
template <typename Value, bool float_return> using FuncQuantileBFloat16 = AggregateFunctionQuantile<Value, QuantileBFloat16Histogram<Value>, NameQuantileBFloat16, false, std::conditional_t<float_return, Float64, void>, false>;
|
||||||
|
template <typename Value, bool float_return> using FuncQuantilesBFloat16 = AggregateFunctionQuantile<Value, QuantileBFloat16Histogram<Value>, NameQuantilesBFloat16, false, std::conditional_t<float_return, Float64, void>, true>;
|
||||||
|
|
||||||
|
|
||||||
template <template <typename, bool> class Function>
|
template <template <typename, bool> class Function>
|
||||||
static constexpr bool supportDecimal()
|
static constexpr bool supportDecimal()
|
||||||
@ -156,6 +159,9 @@ void registerAggregateFunctionsQuantile(AggregateFunctionFactory & factory)
|
|||||||
factory.registerFunction(NameQuantileTDigestWeighted::name, createAggregateFunctionQuantile<FuncQuantileTDigestWeighted>);
|
factory.registerFunction(NameQuantileTDigestWeighted::name, createAggregateFunctionQuantile<FuncQuantileTDigestWeighted>);
|
||||||
factory.registerFunction(NameQuantilesTDigestWeighted::name, createAggregateFunctionQuantile<FuncQuantilesTDigestWeighted>);
|
factory.registerFunction(NameQuantilesTDigestWeighted::name, createAggregateFunctionQuantile<FuncQuantilesTDigestWeighted>);
|
||||||
|
|
||||||
|
factory.registerFunction(NameQuantileBFloat16::name, createAggregateFunctionQuantile<FuncQuantileBFloat16>);
|
||||||
|
factory.registerFunction(NameQuantilesBFloat16::name, createAggregateFunctionQuantile<FuncQuantilesBFloat16>);
|
||||||
|
|
||||||
/// 'median' is an alias for 'quantile'
|
/// 'median' is an alias for 'quantile'
|
||||||
factory.registerAlias("median", NameQuantile::name);
|
factory.registerAlias("median", NameQuantile::name);
|
||||||
factory.registerAlias("medianDeterministic", NameQuantileDeterministic::name);
|
factory.registerAlias("medianDeterministic", NameQuantileDeterministic::name);
|
||||||
@ -167,6 +173,7 @@ void registerAggregateFunctionsQuantile(AggregateFunctionFactory & factory)
|
|||||||
factory.registerAlias("medianTimingWeighted", NameQuantileTimingWeighted::name);
|
factory.registerAlias("medianTimingWeighted", NameQuantileTimingWeighted::name);
|
||||||
factory.registerAlias("medianTDigest", NameQuantileTDigest::name);
|
factory.registerAlias("medianTDigest", NameQuantileTDigest::name);
|
||||||
factory.registerAlias("medianTDigestWeighted", NameQuantileTDigestWeighted::name);
|
factory.registerAlias("medianTDigestWeighted", NameQuantileTDigestWeighted::name);
|
||||||
|
factory.registerAlias("medianBFloat16", NameQuantileBFloat16::name);
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
@ -9,6 +9,7 @@
|
|||||||
#include <AggregateFunctions/QuantileExactWeighted.h>
|
#include <AggregateFunctions/QuantileExactWeighted.h>
|
||||||
#include <AggregateFunctions/QuantileTiming.h>
|
#include <AggregateFunctions/QuantileTiming.h>
|
||||||
#include <AggregateFunctions/QuantileTDigest.h>
|
#include <AggregateFunctions/QuantileTDigest.h>
|
||||||
|
#include <AggregateFunctions/QuantileBFloat16Histogram.h>
|
||||||
|
|
||||||
#include <AggregateFunctions/IAggregateFunction.h>
|
#include <AggregateFunctions/IAggregateFunction.h>
|
||||||
#include <AggregateFunctions/QuantilesCommon.h>
|
#include <AggregateFunctions/QuantilesCommon.h>
|
||||||
@ -228,4 +229,7 @@ struct NameQuantileTDigestWeighted { static constexpr auto name = "quantileTDige
|
|||||||
struct NameQuantilesTDigest { static constexpr auto name = "quantilesTDigest"; };
|
struct NameQuantilesTDigest { static constexpr auto name = "quantilesTDigest"; };
|
||||||
struct NameQuantilesTDigestWeighted { static constexpr auto name = "quantilesTDigestWeighted"; };
|
struct NameQuantilesTDigestWeighted { static constexpr auto name = "quantilesTDigestWeighted"; };
|
||||||
|
|
||||||
|
struct NameQuantileBFloat16 { static constexpr auto name = "quantileBFloat16"; };
|
||||||
|
struct NameQuantilesBFloat16 { static constexpr auto name = "quantilesBFloat16"; };
|
||||||
|
|
||||||
}
|
}
|
||||||
|
63  src/AggregateFunctions/AggregateFunctionSegmentLengthSum.cpp  Normal file
@ -0,0 +1,63 @@
|
|||||||
|
#include <AggregateFunctions/AggregateFunctionFactory.h>
|
||||||
|
#include <AggregateFunctions/AggregateFunctionSegmentLengthSum.h>
|
||||||
|
#include <AggregateFunctions/FactoryHelpers.h>
|
||||||
|
#include <AggregateFunctions/Helpers.h>
|
||||||
|
#include <DataTypes/DataTypeDate.h>
|
||||||
|
#include <DataTypes/DataTypeDateTime.h>
|
||||||
|
|
||||||
|
#include <ext/range.h>
|
||||||
|
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
namespace ErrorCodes
|
||||||
|
{
|
||||||
|
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
|
||||||
|
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace
|
||||||
|
{
|
||||||
|
template <template <typename> class Data>
|
||||||
|
AggregateFunctionPtr createAggregateFunctionSegmentLengthSum(const std::string & name, const DataTypes & arguments, const Array &)
|
||||||
|
{
|
||||||
|
if (arguments.size() != 2)
|
||||||
|
throw Exception(
|
||||||
|
"Aggregate function " + name + " requires two timestamps argument.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
|
||||||
|
|
||||||
|
auto args = {arguments[0].get(), arguments[1].get()};
|
||||||
|
|
||||||
|
if (WhichDataType{args.begin()[0]}.idx != WhichDataType{args.begin()[1]}.idx)
|
||||||
|
throw Exception(
|
||||||
|
"Illegal type " + args.begin()[0]->getName() + " and " + args.begin()[1]->getName() + " of arguments of aggregate function "
|
||||||
|
+ name + ", there two arguments should have same DataType",
|
||||||
|
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||||
|
|
||||||
|
for (const auto & arg : args)
|
||||||
|
{
|
||||||
|
if (!isNativeNumber(arg) && !isDateOrDateTime(arg))
|
||||||
|
throw Exception(
|
||||||
|
"Illegal type " + arg->getName() + " of argument of aggregate function " + name
|
||||||
|
+ ", must be Number, Date, DateTime or DateTime64",
|
||||||
|
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||||
|
}
|
||||||
|
|
||||||
|
AggregateFunctionPtr res(createWithBasicNumberOrDateOrDateTime<AggregateFunctionSegmentLengthSum, Data>(*arguments[0], arguments));
|
||||||
|
|
||||||
|
if (res)
|
||||||
|
return res;
|
||||||
|
|
||||||
|
throw Exception(
|
||||||
|
"Illegal type " + arguments.front().get()->getName() + " of first argument of aggregate function " + name
|
||||||
|
+ ", must be Native Unsigned Number",
|
||||||
|
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||||
|
}
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
void registerAggregateFunctionSegmentLengthSum(AggregateFunctionFactory & factory)
|
||||||
|
{
|
||||||
|
factory.registerFunction("segmentLengthSum", createAggregateFunctionSegmentLengthSum<AggregateFunctionSegmentLengthSumData>);
|
||||||
|
}
|
||||||
|
|
||||||
|
}
|
199  src/AggregateFunctions/AggregateFunctionSegmentLengthSum.h  Normal file
@ -0,0 +1,199 @@
|
|||||||
|
#pragma once
|
||||||
|
|
||||||
|
#include <unordered_set>
|
||||||
|
#include <Columns/ColumnsNumber.h>
|
||||||
|
#include <DataTypes/DataTypeDateTime.h>
|
||||||
|
#include <DataTypes/DataTypesNumber.h>
|
||||||
|
#include <IO/ReadHelpers.h>
|
||||||
|
#include <IO/WriteHelpers.h>
|
||||||
|
#include <Common/ArenaAllocator.h>
|
||||||
|
#include <Common/assert_cast.h>
|
||||||
|
|
||||||
|
#include <AggregateFunctions/AggregateFunctionNull.h>
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
struct AggregateFunctionSegmentLengthSumData
|
||||||
|
{
|
||||||
|
using Segment = std::pair<T, T>;
|
||||||
|
using Segments = PODArrayWithStackMemory<Segment, 64>;
|
||||||
|
|
||||||
|
bool sorted = false;
|
||||||
|
|
||||||
|
Segments segments;
|
||||||
|
|
||||||
|
size_t size() const { return segments.size(); }
|
||||||
|
|
||||||
|
void add(T start, T end)
|
||||||
|
{
|
||||||
|
if (sorted && segments.size() > 0)
|
||||||
|
{
|
||||||
|
sorted = segments.back().first <= start;
|
||||||
|
}
|
||||||
|
segments.emplace_back(start, end);
|
||||||
|
}
|
||||||
|
|
||||||
|
void merge(const AggregateFunctionSegmentLengthSumData & other)
|
||||||
|
{
|
||||||
|
if (other.segments.empty())
|
||||||
|
return;
|
||||||
|
|
||||||
|
const auto size = segments.size();
|
||||||
|
|
||||||
|
segments.insert(std::begin(other.segments), std::end(other.segments));
|
||||||
|
|
||||||
|
/// either sort whole container or do so partially merging ranges afterwards
|
||||||
|
if (!sorted && !other.sorted)
|
||||||
|
std::stable_sort(std::begin(segments), std::end(segments));
|
||||||
|
else
|
||||||
|
{
|
||||||
|
const auto begin = std::begin(segments);
|
||||||
|
const auto middle = std::next(begin, size);
|
||||||
|
const auto end = std::end(segments);
|
||||||
|
|
||||||
|
if (!sorted)
|
||||||
|
std::stable_sort(begin, middle);
|
||||||
|
|
||||||
|
if (!other.sorted)
|
||||||
|
std::stable_sort(middle, end);
|
||||||
|
|
||||||
|
std::inplace_merge(begin, middle, end);
|
||||||
|
}
|
||||||
|
|
||||||
|
sorted = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
void sort()
|
||||||
|
{
|
||||||
|
if (!sorted)
|
||||||
|
{
|
||||||
|
std::stable_sort(std::begin(segments), std::end(segments));
|
||||||
|
sorted = true;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void serialize(WriteBuffer & buf) const
|
||||||
|
{
|
||||||
|
writeBinary(sorted, buf);
|
||||||
|
writeBinary(segments.size(), buf);
|
||||||
|
|
||||||
|
for (const auto & time_gap : segments)
|
||||||
|
{
|
||||||
|
writeBinary(time_gap.first, buf);
|
||||||
|
writeBinary(time_gap.second, buf);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
void deserialize(ReadBuffer & buf)
|
||||||
|
{
|
||||||
|
readBinary(sorted, buf);
|
||||||
|
|
||||||
|
size_t size;
|
||||||
|
readBinary(size, buf);
|
||||||
|
|
||||||
|
segments.clear();
|
||||||
|
segments.reserve(size);
|
||||||
|
|
||||||
|
T start, end;
|
||||||
|
|
||||||
|
for (size_t i = 0; i < size; ++i)
|
||||||
|
{
|
||||||
|
readBinary(start, buf);
|
||||||
|
readBinary(end, buf);
|
||||||
|
segments.emplace_back(start, end);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
template <typename T, typename Data>
|
||||||
|
class AggregateFunctionSegmentLengthSum final : public IAggregateFunctionDataHelper<Data, AggregateFunctionSegmentLengthSum<T, Data>>
|
||||||
|
{
|
||||||
|
private:
|
||||||
|
template <typename TResult>
|
||||||
|
TResult getSegmentLengthSum(Data & data) const
|
||||||
|
{
|
||||||
|
if (data.size() == 0)
|
||||||
|
return 0;
|
||||||
|
|
||||||
|
data.sort();
|
||||||
|
|
||||||
|
TResult res = 0;
|
||||||
|
|
||||||
|
typename Data::Segment cur_segment = data.segments[0];
|
||||||
|
|
||||||
|
for (size_t i = 1; i < data.segments.size(); ++i)
|
||||||
|
{
|
||||||
|
if (cur_segment.second < data.segments[i].first)
|
||||||
|
{
|
||||||
|
res += cur_segment.second - cur_segment.first;
|
||||||
|
cur_segment = data.segments[i];
|
||||||
|
}
|
||||||
|
else
|
||||||
|
cur_segment.second = std::max(cur_segment.second, data.segments[i].second);
|
||||||
|
}
|
||||||
|
|
||||||
|
res += cur_segment.second - cur_segment.first;
|
||||||
|
|
||||||
|
return res;
|
||||||
|
}
|
||||||
|
|
||||||
|
public:
|
||||||
|
String getName() const override { return "segmentLengthSum"; }
|
||||||
|
|
||||||
|
explicit AggregateFunctionSegmentLengthSum(const DataTypes & arguments)
|
||||||
|
: IAggregateFunctionDataHelper<Data, AggregateFunctionSegmentLengthSum<T, Data>>(arguments, {})
|
||||||
|
{
|
||||||
|
}
|
||||||
|
|
||||||
|
DataTypePtr getReturnType() const override
|
||||||
|
{
|
||||||
|
if constexpr (std::is_floating_point_v<T>)
|
||||||
|
return std::make_shared<DataTypeFloat64>();
|
||||||
|
return std::make_shared<DataTypeUInt64>();
|
||||||
|
}
|
||||||
|
|
||||||
|
bool allocatesMemoryInArena() const override { return false; }
|
||||||
|
|
||||||
|
AggregateFunctionPtr getOwnNullAdapter(
|
||||||
|
const AggregateFunctionPtr & nested_function,
|
||||||
|
const DataTypes & arguments,
|
||||||
|
const Array & params,
|
||||||
|
const AggregateFunctionProperties & /*properties*/) const override
|
||||||
|
{
|
||||||
|
return std::make_shared<AggregateFunctionNullVariadic<false, false, false>>(nested_function, arguments, params);
|
||||||
|
}
|
||||||
|
|
||||||
|
void add(AggregateDataPtr __restrict place, const IColumn ** columns, const size_t row_num, Arena *) const override
|
||||||
|
{
|
||||||
|
auto start = assert_cast<const ColumnVector<T> *>(columns[0])->getData()[row_num];
|
||||||
|
auto end = assert_cast<const ColumnVector<T> *>(columns[1])->getData()[row_num];
|
||||||
|
this->data(place).add(start, end);
|
||||||
|
}
|
||||||
|
|
||||||
|
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override
|
||||||
|
{
|
||||||
|
this->data(place).merge(this->data(rhs));
|
||||||
|
}
|
||||||
|
|
||||||
|
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf) const override
|
||||||
|
{
|
||||||
|
this->data(place).serialize(buf);
|
||||||
|
}
|
||||||
|
|
||||||
|
void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, Arena *) const override
|
||||||
|
{
|
||||||
|
this->data(place).deserialize(buf);
|
||||||
|
}
|
||||||
|
|
||||||
|
void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
|
||||||
|
{
|
||||||
|
if constexpr (std::is_floating_point_v<T>)
|
||||||
|
assert_cast<ColumnFloat64 &>(to).getData().push_back(getSegmentLengthSum<Float64>(this->data(place)));
|
||||||
|
else
|
||||||
|
assert_cast<ColumnUInt64 &>(to).getData().push_back(getSegmentLengthSum<UInt64>(this->data(place)));
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
}
|
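For reference, the sweep performed by getSegmentLengthSum above can be shown as a small standalone sketch, using a plain std::vector instead of the aggregate-function machinery (illustrative only):

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Illustrative only: total length covered by a set of [start, end) segments,
    // counting overlapping parts once -- the same sweep getSegmentLengthSum performs
    // after the segments have been sorted by their start point.
    static uint64_t segmentLengthSum(std::vector<std::pair<uint64_t, uint64_t>> segments)
    {
        if (segments.empty())
            return 0;

        std::sort(segments.begin(), segments.end());

        uint64_t total = 0;
        auto current = segments.front();

        for (size_t i = 1; i < segments.size(); ++i)
        {
            if (current.second < segments[i].first)
            {
                // Gap before the next segment: flush the current merged segment.
                total += current.second - current.first;
                current = segments[i];
            }
            else
                current.second = std::max(current.second, segments[i].second);
        }

        total += current.second - current.first;
        return total;
    }

    // segmentLengthSum({{1, 3}, {2, 5}, {7, 8}}) == (5 - 1) + (8 - 7) == 5.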
@ -114,6 +114,24 @@ static IAggregateFunction * createWithUnsignedIntegerType(const IDataType & argu
|
|||||||
return nullptr;
|
return nullptr;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
template <template <typename, typename> class AggregateFunctionTemplate, template <typename> class Data, typename... TArgs>
|
||||||
|
static IAggregateFunction * createWithBasicNumberOrDateOrDateTime(const IDataType & argument_type, TArgs &&... args)
|
||||||
|
{
|
||||||
|
WhichDataType which(argument_type);
|
||||||
|
#define DISPATCH(TYPE) \
|
||||||
|
if (which.idx == TypeIndex::TYPE) \
|
||||||
|
return new AggregateFunctionTemplate<TYPE, Data<TYPE>>(std::forward<TArgs>(args)...);
|
||||||
|
FOR_BASIC_NUMERIC_TYPES(DISPATCH)
|
||||||
|
#undef DISPATCH
|
||||||
|
|
||||||
|
if (which.idx == TypeIndex::Date)
|
||||||
|
return new AggregateFunctionTemplate<UInt16, Data<UInt16>>(std::forward<TArgs>(args)...);
|
||||||
|
if (which.idx == TypeIndex::DateTime)
|
||||||
|
return new AggregateFunctionTemplate<UInt32, Data<UInt32>>(std::forward<TArgs>(args)...);
|
||||||
|
|
||||||
|
return nullptr;
|
||||||
|
}
|
||||||
|
|
||||||
template <template <typename> class AggregateFunctionTemplate, typename... TArgs>
|
template <template <typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||||
static IAggregateFunction * createWithNumericBasedType(const IDataType & argument_type, TArgs && ... args)
|
static IAggregateFunction * createWithNumericBasedType(const IDataType & argument_type, TArgs && ... args)
|
||||||
{
|
{
|
||||||
|
207  src/AggregateFunctions/QuantileBFloat16Histogram.h  Normal file
@ -0,0 +1,207 @@
|
|||||||
|
#pragma once
|
||||||
|
|
||||||
|
#include <IO/ReadBuffer.h>
|
||||||
|
#include <IO/WriteBuffer.h>
|
||||||
|
#include <Common/HashTable/HashMap.h>
|
||||||
|
#include <common/types.h>
|
||||||
|
#include <ext/bit_cast.h>
|
||||||
|
|
||||||
|
|
||||||
|
namespace DB
|
||||||
|
{
|
||||||
|
|
||||||
|
/** `bfloat16` is a 16-bit floating point data type that is the same as the corresponding most significant 16 bits of the `float`.
|
||||||
|
* https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
|
||||||
|
*
|
||||||
|
* To calculate quantile, simply convert input value to 16 bit (convert to float, then take the most significant 16 bits),
|
||||||
|
* and calculate the histogram of these values.
|
||||||
|
*
|
||||||
|
* Hash table is the preferred way to store histogram, because the number of distinct values is small:
|
||||||
|
* ```
|
||||||
|
* SELECT uniq(bfloat)
|
||||||
|
* FROM
|
||||||
|
* (
|
||||||
|
* SELECT
|
||||||
|
* number,
|
||||||
|
* toFloat32(number) AS f,
|
||||||
|
* bitShiftRight(bitAnd(reinterpretAsUInt32(reinterpretAsFixedString(f)), 4294901760) AS cut, 16),
|
||||||
|
* reinterpretAsFloat32(reinterpretAsFixedString(cut)) AS bfloat
|
||||||
|
* FROM numbers(100000000)
|
||||||
|
* )
|
||||||
|
*
|
||||||
|
* ┌─uniq(bfloat)─┐
|
||||||
|
* │ 2623 │
|
||||||
|
* └──────────────┘
|
||||||
|
* ```
|
||||||
|
* (when increasing the range of values 1000 times, the number of distinct bfloat16 values increases just by 1280).
|
||||||
|
*
|
||||||
|
* Then calculate quantile from the histogram.
|
||||||
|
*
|
||||||
|
* This sketch is very simple and rough. Its relative precision is constant 1 / 256 = 0.390625%.
|
||||||
|
*/
|
||||||
|
template <typename Value>
|
||||||
|
struct QuantileBFloat16Histogram
|
||||||
|
{
|
||||||
|
using BFloat16 = UInt16;
|
||||||
|
using Weight = UInt64;
|
||||||
|
|
||||||
|
/// Make automatic memory for 16 elements to avoid allocations for small states.
|
||||||
|
/// The usage of trivial hash is ok, because we effectively take logarithm of the values and pathological cases are unlikely.
|
||||||
|
using Data = HashMapWithStackMemory<BFloat16, Weight, TrivialHash, 4>;
|
||||||
|
|
||||||
|
Data data;
|
||||||
|
|
||||||
|
void add(const Value & x)
|
||||||
|
{
|
||||||
|
add(x, 1);
|
||||||
|
}
|
||||||
|
|
||||||
|
void add(const Value & x, Weight w)
|
||||||
|
{
|
||||||
|
if (!isNaN(x))
|
||||||
|
data[toBFloat16(x)] += w;
|
||||||
|
}
|
||||||
|
|
||||||
|
void merge(const QuantileBFloat16Histogram & rhs)
|
||||||
|
{
|
||||||
|
for (const auto & pair : rhs.data)
|
||||||
|
data[pair.getKey()] += pair.getMapped();
|
||||||
|
}
|
||||||
|
|
||||||
|
void serialize(WriteBuffer & buf) const
|
||||||
|
{
|
||||||
|
data.write(buf);
|
||||||
|
}
|
||||||
|
|
||||||
|
void deserialize(ReadBuffer & buf)
|
||||||
|
{
|
||||||
|
data.read(buf);
|
||||||
|
}
|
||||||
|
|
||||||
|
Value get(Float64 level) const
|
||||||
|
{
|
||||||
|
return getImpl<Value>(level);
|
||||||
|
}
|
||||||
|
|
||||||
|
void getMany(const Float64 * levels, const size_t * indices, size_t size, Value * result) const
|
||||||
|
{
|
||||||
|
getManyImpl(levels, indices, size, result);
|
||||||
|
}
|
||||||
|
|
||||||
|
Float64 getFloat(Float64 level) const
|
||||||
|
{
|
||||||
|
return getImpl<Float64>(level);
|
||||||
|
}
|
||||||
|
|
||||||
|
void getManyFloat(const Float64 * levels, const size_t * indices, size_t size, Float64 * result) const
|
||||||
|
{
|
||||||
|
getManyImpl(levels, indices, size, result);
|
||||||
|
}
|
||||||
|
|
||||||
|
private:
|
||||||
|
/// Take the most significant 16 bits of the floating point number.
|
||||||
|
BFloat16 toBFloat16(const Value & x) const
|
||||||
|
{
|
||||||
|
return ext::bit_cast<UInt32>(static_cast<Float32>(x)) >> 16;
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Put the bits into most significant 16 bits of the floating point number and fill other bits with zeros.
|
||||||
|
Float32 toFloat32(const BFloat16 & x) const
|
||||||
|
{
|
||||||
|
return ext::bit_cast<Float32>(x << 16);
|
||||||
|
}
|
||||||
|
|
||||||
|
using Pair = PairNoInit<Float32, Weight>;
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
T getImpl(Float64 level) const
|
||||||
|
{
|
||||||
|
size_t size = data.size();
|
||||||
|
|
||||||
|
if (0 == size)
|
||||||
|
return std::numeric_limits<T>::quiet_NaN();
|
||||||
|
|
||||||
|
std::unique_ptr<Pair[]> array_holder(new Pair[size]);
|
||||||
|
Pair * array = array_holder.get();
|
||||||
|
|
||||||
|
Float64 sum_weight = 0;
|
||||||
|
Pair * arr_it = array;
|
||||||
|
for (const auto & pair : data)
|
||||||
|
{
|
||||||
|
sum_weight += pair.getMapped();
|
||||||
|
*arr_it = {toFloat32(pair.getKey()), pair.getMapped()};
|
||||||
|
++arr_it;
|
||||||
|
}
|
||||||
|
|
||||||
|
std::sort(array, array + size, [](const Pair & a, const Pair & b) { return a.first < b.first; });
|
||||||
|
|
||||||
|
Float64 threshold = std::ceil(sum_weight * level);
|
||||||
|
Float64 accumulated = 0;
|
||||||
|
|
||||||
|
for (const Pair * p = array; p != (array + size); ++p)
|
||||||
|
{
|
||||||
|
accumulated += p->second;
|
||||||
|
|
||||||
|
if (accumulated >= threshold)
|
||||||
|
return p->first;
|
||||||
|
}
|
||||||
|
|
||||||
|
return array[size - 1].first;
|
||||||
|
}
|
||||||
|
|
||||||
|
template <typename T>
|
||||||
|
void getManyImpl(const Float64 * levels, const size_t * indices, size_t num_levels, T * result) const
|
||||||
|
{
|
||||||
|
size_t size = data.size();
|
||||||
|
|
||||||
|
if (0 == size)
|
||||||
|
{
|
||||||
|
for (size_t i = 0; i < num_levels; ++i)
|
||||||
|
result[i] = std::numeric_limits<T>::quiet_NaN();
|
||||||
|
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
std::unique_ptr<Pair[]> array_holder(new Pair[size]);
|
||||||
|
Pair * array = array_holder.get();
|
||||||
|
|
||||||
|
Float64 sum_weight = 0;
|
||||||
|
Pair * arr_it = array;
|
||||||
|
for (const auto & pair : data)
|
||||||
|
{
|
||||||
|
sum_weight += pair.getMapped();
|
||||||
|
*arr_it = {toFloat32(pair.getKey()), pair.getMapped()};
|
||||||
|
++arr_it;
|
||||||
|
}
|
||||||
|
|
||||||
|
std::sort(array, array + size, [](const Pair & a, const Pair & b) { return a.first < b.first; });
|
||||||
|
|
||||||
|
size_t level_index = 0;
|
||||||
|
Float64 accumulated = 0;
|
||||||
|
Float64 threshold = std::ceil(sum_weight * levels[indices[level_index]]);
|
||||||
|
|
||||||
|
for (const Pair * p = array; p != (array + size); ++p)
|
||||||
|
{
|
||||||
|
accumulated += p->second;
|
||||||
|
|
||||||
|
while (accumulated >= threshold)
|
||||||
|
{
|
||||||
|
result[indices[level_index]] = p->first;
|
||||||
|
++level_index;
|
||||||
|
|
||||||
|
if (level_index == num_levels)
|
||||||
|
return;
|
||||||
|
|
||||||
|
threshold = std::ceil(sum_weight * levels[indices[level_index]]);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
while (level_index < num_levels)
|
||||||
|
{
|
||||||
|
result[indices[level_index]] = array[size - 1].first;
|
||||||
|
++level_index;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
}
|
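A standalone sketch, not taken from this commit, of the bfloat16 truncation this histogram relies on: keep the top 16 bits of a float (sign, exponent and the top 7 mantissa bits) and widen them back with zero fill. It uses plain std::memcpy bit-casts instead of ext::bit_cast, which is an assumption for the sake of a self-contained example.

#include <cstdint>
#include <cstring>

/// Keep only the most significant 16 bits of a float.
static uint16_t toBFloat16Sketch(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    return static_cast<uint16_t>(bits >> 16);
}

/// Widen back: the dropped mantissa bits become zeros, giving roughly 1/256 relative precision.
static float toFloat32Sketch(uint16_t x)
{
    uint32_t bits = static_cast<uint32_t>(x) << 16;
    float result;
    std::memcpy(&result, &bits, sizeof(result));
    return result;
}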
@ -62,6 +62,7 @@ void registerAggregateFunctionCombinatorDistinct(AggregateFunctionCombinatorFact
 void registerWindowFunctions(AggregateFunctionFactory & factory);

+void registerAggregateFunctionSegmentLengthSum(AggregateFunctionFactory &);

 void registerAggregateFunctions()
 {
@ -111,6 +112,8 @@ void registerAggregateFunctions()
     registerAggregateFunctionStudentTTest(factory);

     registerWindowFunctions(factory);
+
+    registerAggregateFunctionSegmentLengthSum(factory);
 }

 {
@ -43,6 +43,7 @@ SRCS(
     AggregateFunctionRankCorrelation.cpp
     AggregateFunctionResample.cpp
     AggregateFunctionRetention.cpp
+    AggregateFunctionSegmentLengthSum.cpp
     AggregateFunctionSequenceMatch.cpp
     AggregateFunctionSimpleLinearRegression.cpp
     AggregateFunctionSimpleState.cpp
@ -108,7 +108,7 @@ list (APPEND clickhouse_common_io_sources ${CONFIG_BUILD})
 list (APPEND clickhouse_common_io_headers ${CONFIG_VERSION} ${CONFIG_COMMON})

 list (APPEND dbms_sources Functions/IFunction.cpp Functions/FunctionFactory.cpp Functions/FunctionHelpers.cpp Functions/extractTimeZoneFromFunctionArguments.cpp Functions/replicate.cpp Functions/FunctionsLogical.cpp)
-list (APPEND dbms_headers Functions/IFunctionImpl.h Functions/FunctionFactory.h Functions/FunctionHelpers.h Functions/extractTimeZoneFromFunctionArguments.h Functions/replicate.h Functions/FunctionsLogical.h)
+list (APPEND dbms_headers Functions/IFunction.h Functions/FunctionFactory.h Functions/FunctionHelpers.h Functions/extractTimeZoneFromFunctionArguments.h Functions/replicate.h Functions/FunctionsLogical.h)

 list (APPEND dbms_sources
     AggregateFunctions/AggregateFunctionFactory.cpp
@ -188,6 +188,7 @@ add_object_library(clickhouse_interpreters_clusterproxy Interpreters/ClusterProx
 add_object_library(clickhouse_interpreters_jit Interpreters/JIT)
 add_object_library(clickhouse_columns Columns)
 add_object_library(clickhouse_storages Storages)
+add_object_library(clickhouse_storages_mysql Storages/MySQL)
 add_object_library(clickhouse_storages_distributed Storages/Distributed)
 add_object_library(clickhouse_storages_mergetree Storages/MergeTree)
 add_object_library(clickhouse_storages_liveview Storages/LiveView)
@ -99,9 +99,17 @@ public:
     /// Free memory range.
     void free(void * buf, size_t size)
     {
-        checkSize(size);
-        freeNoTrack(buf, size);
-        CurrentMemoryTracker::free(size);
+        try
+        {
+            checkSize(size);
+            freeNoTrack(buf, size);
+            CurrentMemoryTracker::free(size);
+        }
+        catch (...)
+        {
+            DB::tryLogCurrentException("Allocator::free");
+            throw;
+        }
     }

     /** Enlarge memory range.
@ -1,309 +0,0 @@
#pragma once

#include <cstddef>
#include <cstdlib>

#include <Common/Exception.h>
#include <Common/formatReadable.h>


namespace DB
{

namespace ErrorCodes
{
    extern const int CANNOT_ALLOCATE_MEMORY;
}

/** An array of (almost) unchangeable size:
  * the size is specified in the constructor;
  * `resize` method removes old data, and necessary only for
  * so that you can first create an empty object using the default constructor,
  * and then decide on the size.
  *
  * There is a possibility to not initialize elements by default, but create them inplace.
  * Member destructors are called automatically.
  *
  * `sizeof` is equal to the size of one pointer.
  *
  * Not exception-safe.
  *
  * Copying is supported via assign() method. Moving empties the original object.
  * That is, it is inconvenient to use this array in many cases.
  *
  * Designed for situations in which many arrays of the same small size are created,
  * but the size is not known at compile time.
  * Also gives a significant advantage in cases where it is important that `sizeof` is minimal.
  * For example, if arrays are put in an open-addressing hash table with inplace storage of values (like HashMap)
  *
  * In this case, compared to std::vector:
  * - for arrays of 1 element size - an advantage of about 2 times;
  * - for arrays of 5 elements - an advantage of about 1.5 times
  *   (DB::Field, containing UInt64 and String, used as T);
  */

const size_t empty_auto_array_helper = 0;

template <typename T>
class AutoArray
{
public:
    /// For deferred creation.
    AutoArray()
    {
        setEmpty();
    }

    explicit AutoArray(size_t size_)
    {
        init(size_, false);
    }

    /** Initializes all elements with a copy constructor with the `value` parameter.
      */
    AutoArray(size_t size_, const T & value)
    {
        init(size_, true);

        for (size_t i = 0; i < size_; ++i)
        {
            new (place(i)) T(value);
        }
    }

    /** `resize` removes all existing items.
      */
    void resize(size_t size_, bool dont_init_elems = false)
    {
        uninit();
        init(size_, dont_init_elems);
    }

    /** Move operations.
      */
    AutoArray(AutoArray && src)
    {
        if (this == &src)
            return;
        setEmpty();
        data_ptr = src.data_ptr;
        src.setEmpty();
    }

    AutoArray & operator= (AutoArray && src)
    {
        if (this == &src)
            return *this;
        uninit();
        data_ptr = src.data_ptr;
        src.setEmpty();

        return *this;
    }

    ~AutoArray()
    {
        uninit();
    }

    size_t size() const
    {
        return m_size();
    }

    bool empty() const
    {
        return size() == 0;
    }

    void clear()
    {
        uninit();
        setEmpty();
    }

    template <typename It>
    void assign(It from_begin, It from_end)
    {
        uninit();

        size_t size = from_end - from_begin;
        init(size, /* dont_init_elems = */ true);

        It it = from_begin;
        for (size_t i = 0; i < size; ++i, ++it)
            new (place(i)) T(*it);
    }

    void assign(const AutoArray & from)
    {
        assign(from.begin(), from.end());
    }

    /** You can read and modify elements using the [] operator
      * only if items were initialized
      * (that is, into the constructor was not passed DontInitElemsTag,
      * or you initialized them using `place` and `placement new`).
      */
    T & operator[](size_t i)
    {
        return elem(i);
    }

    const T & operator[](size_t i) const
    {
        return elem(i);
    }

    T * data()
    {
        return elemPtr(0);
    }

    const T * data() const
    {
        return elemPtr(0);
    }

    /** Get the piece of memory in which the element should be located.
      * The function is intended to initialize an element,
      * which has not yet been initialized
      * new (arr.place(i)) T(args);
      */
    char * place(size_t i)
    {
        return data_ptr + sizeof(T) * i;
    }

    using iterator = T *;
    using const_iterator = const T *;

    iterator begin() { return elemPtr(0); }
    iterator end() { return elemPtr(size()); }

    const_iterator begin() const { return elemPtr(0); }
    const_iterator end() const { return elemPtr(size()); }

    bool operator== (const AutoArray<T> & rhs) const
    {
        size_t s = size();

        if (s != rhs.size())
            return false;

        for (size_t i = 0; i < s; ++i)
            if (elem(i) != rhs.elem(i))
                return false;

        return true;
    }

    bool operator!= (const AutoArray<T> & rhs) const
    {
        return !(*this == rhs);
    }

    bool operator< (const AutoArray<T> & rhs) const
    {
        size_t s = size();
        size_t rhs_s = rhs.size();

        if (s < rhs_s)
            return true;
        if (s > rhs_s)
            return false;

        for (size_t i = 0; i < s; ++i)
        {
            if (elem(i) < rhs.elem(i))
                return true;
            if (elem(i) > rhs.elem(i))
                return false;
        }

        return false;
    }

private:
    static constexpr size_t alignment = alignof(T);
    /// Bytes allocated to store size of array before data. It is padded to have minimum size as alignment.
    /// Padding is at left and the size is stored at right (just before the first data element).
    static constexpr size_t prefix_size = std::max(sizeof(size_t), alignment);

    char * data_ptr;

    size_t & m_size()
    {
        return reinterpret_cast<size_t *>(data_ptr)[-1];
    }

    size_t m_size() const
    {
        return reinterpret_cast<const size_t *>(data_ptr)[-1];
    }

    T * elemPtr(size_t i)
    {
        return reinterpret_cast<T *>(data_ptr) + i;
    }

    const T * elemPtr(size_t i) const
    {
        return reinterpret_cast<const T *>(data_ptr) + i;
    }

    T & elem(size_t i)
    {
        return *elemPtr(i);
    }

    const T & elem(size_t i) const
    {
        return *elemPtr(i);
    }

    void setEmpty()
    {
        data_ptr = const_cast<char *>(reinterpret_cast<const char *>(&empty_auto_array_helper)) + sizeof(size_t);
    }

    void init(size_t new_size, bool dont_init_elems)
    {
        if (!new_size)
        {
            setEmpty();
            return;
        }

        void * new_data = nullptr;
        int res = posix_memalign(&new_data, alignment, prefix_size + new_size * sizeof(T));
        if (0 != res)
            throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign) {}.", ReadableSize(new_size)),
                ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);

        data_ptr = static_cast<char *>(new_data);
        data_ptr += prefix_size;

        m_size() = new_size;

        if (!dont_init_elems)
            for (size_t i = 0; i < new_size; ++i)
                new (place(i)) T();
    }

    void uninit()
    {
        size_t s = size();

        if (s)
        {
            for (size_t i = 0; i < s; ++i)
                elem(i).~T();

            data_ptr -= prefix_size;
            free(data_ptr);
        }
    }
};

}
@ -3,6 +3,7 @@ set (SRCS
     ConfigProcessor.cpp
     configReadClient.cpp
     ConfigReloader.cpp
+    YAMLParser.cpp
 )

 add_library(clickhouse_common_config ${SRCS})
@ -15,3 +16,10 @@ target_link_libraries(clickhouse_common_config
     PRIVATE
         string_utils
 )
+
+if (USE_YAML_CPP)
+    target_link_libraries(clickhouse_common_config
+        PRIVATE
+            yaml-cpp
+    )
+endif()
@ -1,4 +1,8 @@
+#if !defined(ARCADIA_BUILD)
+#include <Common/config.h>
+#endif
 #include "ConfigProcessor.h"
+#include "YAMLParser.h"

 #include <sys/utsname.h>
 #include <cerrno>
@ -20,10 +24,8 @@
 #include <IO/WriteBufferFromString.h>
 #include <IO/Operators.h>
-
-
 #define PREPROCESSED_SUFFIX "-preprocessed"

 namespace fs = std::filesystem;

 using namespace Poco::XML;
@ -438,8 +440,10 @@ ConfigProcessor::Files ConfigProcessor::getConfigMergeFiles(const std::string &
         std::string base_name = path.getBaseName();

         // Skip non-config and temporary files
-        if (file.isFile() && (extension == "xml" || extension == "conf") && !startsWith(base_name, "."))
-            files.push_back(file.path());
+        if (file.isFile() && (extension == "xml" || extension == "conf" || extension == "yaml" || extension == "yml") && !startsWith(base_name, "."))
+        {
+            files.push_back(file.path());
+        }
     }
 }
@ -453,19 +457,37 @@ XMLDocumentPtr ConfigProcessor::processConfig(
     zkutil::ZooKeeperNodeCache * zk_node_cache,
     const zkutil::EventPtr & zk_changed_event)
 {
-    XMLDocumentPtr config;
     LOG_DEBUG(log, "Processing configuration file '{}'.", path);

+    XMLDocumentPtr config;
+
     if (fs::exists(path))
     {
-        config = dom_parser.parse(path);
+        fs::path p(path);
+        if (p.extension() == ".xml")
+        {
+            config = dom_parser.parse(path);
+        }
+        else if (p.extension() == ".yaml" || p.extension() == ".yml")
+        {
+            config = YAMLParser::parse(path);
+        }
     }
     else
     {
-        /// When we can use config embedded in binary.
+        /// These embedded files added during build with some cmake magic.
+        /// Look at the end of programs/sever/CMakeLists.txt.
+        std::string embedded_name;
         if (path == "config.xml")
+            embedded_name = "embedded.xml";
+
+        if (path == "keeper_config.xml")
+            embedded_name = "keeper_embedded.xml";
+
+        /// When we can use config embedded in binary.
+        if (!embedded_name.empty())
         {
-            auto resource = getResource("embedded.xml");
+            auto resource = getResource(embedded_name);
             if (resource.empty())
                 throw Exception(ErrorCodes::FILE_DOESNT_EXIST, "Configuration file {} doesn't exist and there is no embedded config", path);
             LOG_DEBUG(log, "There is no file '{}', will use embedded config.", path);
@ -484,8 +506,20 @@ XMLDocumentPtr ConfigProcessor::processConfig(
     {
         LOG_DEBUG(log, "Merging configuration file '{}'.", merge_file);

-        XMLDocumentPtr with = dom_parser.parse(merge_file);
+        XMLDocumentPtr with;
+
+        fs::path p(merge_file);
+        if (p.extension() == ".yaml" || p.extension() == ".yml")
+        {
+            with = YAMLParser::parse(merge_file);
+        }
+        else
+        {
+            with = dom_parser.parse(merge_file);
+        }
+
         merge(config, with);

         contributing_files.push_back(merge_file);
     }
     catch (Exception & e)
@ -1,5 +1,9 @@
 #pragma once

+#if !defined(ARCADIA_BUILD)
+#include <Common/config.h>
+#endif
+
 #include <string>
 #include <unordered_set>
 #include <vector>
@ -141,3 +145,4 @@ private:
 };

 }
+
166 src/Common/Config/YAMLParser.cpp Normal file
@ -0,0 +1,166 @@
#if !defined(ARCADIA_BUILD)
#include <Common/config.h>
#endif

#if USE_YAML_CPP
#include "YAMLParser.h"

#include <string>
#include <cstring>
#include <vector>

#include <Poco/DOM/Document.h>
#include <Poco/DOM/DOMParser.h>
#include <Poco/DOM/DOMWriter.h>
#include <Poco/DOM/NodeList.h>
#include <Poco/DOM/Element.h>
#include <Poco/DOM/AutoPtr.h>
#include <Poco/DOM/NamedNodeMap.h>
#include <Poco/DOM/Text.h>
#include <Common/Exception.h>

#include <yaml-cpp/yaml.h> // Y_IGNORE

#include <common/logger_useful.h>

using namespace Poco::XML;

namespace DB
{

namespace ErrorCodes
{
    extern const int CANNOT_OPEN_FILE;
    extern const int CANNOT_PARSE_YAML;
}

/// A prefix symbol in yaml key
/// We add attributes to nodes by using a prefix symbol in the key part.
/// Currently we use @ as a prefix symbol. Note, that @ is reserved
/// by YAML standard, so we need to write a key-value pair like this: "@attribute": attr_value
const char YAML_ATTRIBUTE_PREFIX = '@';

namespace
{

Poco::AutoPtr<Poco::XML::Element> createCloneNode(Poco::XML::Element & original_node)
{
    Poco::AutoPtr<Poco::XML::Element> clone_node = original_node.ownerDocument()->createElement(original_node.nodeName());
    original_node.parentNode()->appendChild(clone_node);
    return clone_node;
}

void processNode(const YAML::Node & node, Poco::XML::Element & parent_xml_element)
{
    auto * xml_document = parent_xml_element.ownerDocument();
    switch (node.Type())
    {
        case YAML::NodeType::Scalar:
        {
            auto value = node.as<std::string>();
            Poco::AutoPtr<Poco::XML::Text> xml_value = xml_document->createTextNode(value);
            parent_xml_element.appendChild(xml_value);
            break;
        }

        /// We process YAML Sequences as a
        /// list of <key>value</key> tags with same key and different values.
        /// For example, we translate this sequence
        /// seq:
        ///     - val1
        ///     - val2
        ///
        /// into this:
        /// <seq>val1</seq>
        /// <seq>val2</seq>
        case YAML::NodeType::Sequence:
        {
            for (const auto & child_node : node)
                if (parent_xml_element.hasChildNodes())
                {
                    /// We want to process sequences like that:
                    /// seq:
                    ///     - val1
                    ///     - k2: val2
                    ///     - val3
                    ///     - k4: val4
                    ///     - val5
                    /// into xml like this:
                    /// <seq>val1</seq>
                    /// <seq>
                    ///     <k2>val2</k2>
                    /// </seq>
                    /// <seq>val3</seq>
                    /// <seq>
                    ///     <k4>val4</k4>
                    /// </seq>
                    /// <seq>val5</seq>
                    /// So, we create a new parent node with same tag for each child node
                    processNode(child_node, *createCloneNode(parent_xml_element));
                }
                else
                {
                    processNode(child_node, parent_xml_element);
                }
            break;
        }
        case YAML::NodeType::Map:
        {
            for (const auto & key_value_pair : node)
            {
                const auto & key_node = key_value_pair.first;
                const auto & value_node = key_value_pair.second;
                auto key = key_node.as<std::string>();
                bool is_attribute = (key.starts_with(YAML_ATTRIBUTE_PREFIX) && value_node.IsScalar());
                if (is_attribute)
                {
                    /// we use substr(1) here to remove YAML_ATTRIBUTE_PREFIX from key
                    auto attribute_name = key.substr(1);
                    auto value = value_node.as<std::string>();
                    parent_xml_element.setAttribute(attribute_name, value);
                }
                else
                {
                    Poco::AutoPtr<Poco::XML::Element> xml_key = xml_document->createElement(key);
                    parent_xml_element.appendChild(xml_key);
                    processNode(value_node, *xml_key);
                }
            }
            break;
        }
        case YAML::NodeType::Null: break;
        case YAML::NodeType::Undefined:
        {
            throw Exception(ErrorCodes::CANNOT_PARSE_YAML, "YAMLParser has encountered node with undefined type and cannot continue parsing of the file");
        }
    }
}

}

Poco::AutoPtr<Poco::XML::Document> YAMLParser::parse(const String& path)
{
    YAML::Node node_yml;
    try
    {
        node_yml = YAML::LoadFile(path);
    }
    catch (const YAML::ParserException& e)
    {
        /// yaml-cpp cannot parse the file because its contents are incorrect
        throw Exception(ErrorCodes::CANNOT_PARSE_YAML, "Unable to parse YAML configuration file {}", path, e.what());
    }
    catch (const YAML::BadFile&)
    {
        /// yaml-cpp cannot open the file even though it exists
        throw Exception(ErrorCodes::CANNOT_OPEN_FILE, "Unable to open YAML configuration file {}", path);
    }
    Poco::AutoPtr<Poco::XML::Document> xml = new Document;
    Poco::AutoPtr<Poco::XML::Element> root_node = xml->createElement("yandex");
    xml->appendChild(root_node);
    processNode(node_yml, *root_node);
    return xml;
}

}
#endif
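A hedged usage sketch, not part of this commit: how calling code such as ConfigProcessor is expected to invoke this parser, with the '@' attribute convention described above shown only in comments. The YAML snippet in the comment is an assumption made up for illustration.

/// Illustrative only; assumes a YAML config such as:
///     logger:
///         level: information
///     users:
///         "@replace": "1"      <- keys starting with '@' become XML attributes
/// which would be converted to <yandex><logger><level>information</level></logger>
/// <users replace="1"/></yandex> before the usual XML merge logic runs.
Poco::AutoPtr<Poco::XML::Document> loadYamlConfigSketch(const std::string & path)
{
    return DB::YAMLParser::parse(path);
}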
55 src/Common/Config/YAMLParser.h Normal file
@ -0,0 +1,55 @@
#pragma once

#if !defined(ARCADIA_BUILD)
#include <Common/config.h>
#endif

#include <string>

#include <Poco/DOM/Document.h>
#include "Poco/DOM/AutoPtr.h"
#include <common/logger_useful.h>

#if USE_YAML_CPP

namespace DB
{

/// Real YAML parser: loads yaml file into a YAML::Node
class YAMLParserImpl
{
public:
    static Poco::AutoPtr<Poco::XML::Document> parse(const String& path);
};

using YAMLParser = YAMLParserImpl;

}

#else

namespace DB
{

namespace ErrorCodes
{
    extern const int CANNOT_PARSE_YAML;
}

/// Fake YAML parser: throws an exception if we try to parse YAML configs in a build without yaml-cpp
class DummyYAMLParser
{
public:
    static Poco::AutoPtr<Poco::XML::Document> parse(const String& path)
    {
        Poco::AutoPtr<Poco::XML::Document> xml = new Poco::XML::Document;
        throw Exception(ErrorCodes::CANNOT_PARSE_YAML, "Unable to parse YAML configuration file {} without usage of yaml-cpp library", path);
        return xml;
    }
};

using YAMLParser = DummyYAMLParser;

}

#endif
@ -87,9 +87,20 @@ static DNSResolver::IPAddresses resolveIPAddressImpl(const std::string & host)
 {
     Poco::Net::IPAddress ip;

-    /// NOTE: Poco::Net::DNS::resolveOne(host) doesn't work for IP addresses like 127.0.0.2
-    if (Poco::Net::IPAddress::tryParse(host, ip))
-        return DNSResolver::IPAddresses(1, ip);
+    /// NOTE:
+    /// - Poco::Net::DNS::resolveOne(host) doesn't work for IP addresses like 127.0.0.2
+    /// - Poco::Net::IPAddress::tryParse() expect hex string for IPv6 (w/o brackets)
+    if (host.starts_with('['))
+    {
+        assert(host.ends_with(']'));
+        if (Poco::Net::IPAddress::tryParse(host.substr(1, host.size() - 2), ip))
+            return DNSResolver::IPAddresses(1, ip);
+    }
+    else
+    {
+        if (Poco::Net::IPAddress::tryParse(host, ip))
+            return DNSResolver::IPAddresses(1, ip);
+    }

     /// Family: AF_UNSPEC
     /// AI_ALL is required for checking if client is allowed to connect from an address
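A minimal standalone sketch of the bracket handling added above (assumption: plain std::string instead of the ClickHouse helpers): strip the surrounding '[' and ']' before handing an IPv6 literal to the address parser, and leave other hosts untouched.

#include <cassert>
#include <string>

/// "[::1]" -> "::1"; "127.0.0.2" stays unchanged.
static std::string stripIPv6Brackets(const std::string & host)
{
    if (!host.empty() && host.front() == '[')
    {
        assert(host.back() == ']');
        return host.substr(1, host.size() - 2);
    }
    return host;
}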
@ -552,6 +552,7 @@
     M(582, NO_SUCH_PROJECTION_IN_TABLE) \
     M(583, ILLEGAL_PROJECTION) \
     M(584, PROJECTION_NOT_USED) \
+    M(585, CANNOT_PARSE_YAML) \
     \
     M(998, POSTGRESQL_CONNECTION_FAILURE) \
     M(999, KEEPER_EXCEPTION) \
@ -220,6 +220,12 @@ public:
         return find(key) != nullptr;
     }

+    Value & ALWAYS_INLINE operator[](const Key & key)
+    {
+        auto [it, _] = emplace(key);
+        return it->getMapped();
+    }
+
     bool ALWAYS_INLINE erase(const Key & key)
     {
         auto key_hash = Base::hash(key);
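A hedged usage note, not part of the commit: with this operator[], counting can be written the same way as with std::unordered_map, since the mapped value is default-constructed on first access. The sketch below assumes the ClickHouse HashMap header is available.

#include <Common/HashTable/HashMap.h>

/// Illustrative only: operator[] default-constructs the mapped value the first time a key is seen.
static UInt64 countOnesSketch()
{
    HashMap<UInt64, UInt64> counts;
    for (UInt64 key : {1, 2, 1})
        ++counts[key];
    return counts[1];   /// == 2
}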
@ -90,17 +90,16 @@ private:
     }
 };

-template <size_t MaxNumHints, class Self>
+template <size_t MaxNumHints, typename Self>
 class IHints
 {
 public:

     virtual std::vector<String> getAllRegisteredNames() const = 0;

     std::vector<String> getHints(const String & name) const
     {
-        static const auto registered_names = getAllRegisteredNames();
-        return prompter.getHints(name, registered_names);
+        return prompter.getHints(name, getAllRegisteredNames());
     }

     virtual ~IHints() = default;
@ -513,7 +513,7 @@ public:
     insertPrepare(from_begin, from_end);

     if (unlikely(bytes_to_move))
-        memcpy(this->c_end + bytes_to_copy - bytes_to_move, this->c_end - bytes_to_move, bytes_to_move);
+        memmove(this->c_end + bytes_to_copy - bytes_to_move, this->c_end - bytes_to_move, bytes_to_move);

     memcpy(this->c_end - bytes_to_move, reinterpret_cast<const void *>(&*from_begin), bytes_to_copy);
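A one-line rationale plus a standalone illustration, not from the commit: the source and destination ranges here can overlap because elements are shifted within the same buffer, and memcpy has undefined behaviour for overlapping ranges, so memmove is required.

#include <cstring>
#include <cstdio>

int main()
{
    char buf[] = "abcdef";
    /// Shift "abcd" right by two positions inside the same buffer: the ranges overlap,
    /// so memmove (not memcpy) must be used.
    std::memmove(buf + 2, buf, 4);
    std::puts(buf);   /// prints "ababcd"
}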
@ -123,7 +123,7 @@ inline bool isWhitespaceASCII(char c)
 /// Since |isWhiteSpaceASCII()| is used inside algorithms it's easier to implement another function than add extra argument.
 inline bool isWhitespaceASCIIOneLine(char c)
 {
-    return c == ' ' || c == '\t' || c == '\r' || c == '\f' || c == '\v';
+    return c == ' ' || c == '\t' || c == '\f' || c == '\v';
 }

 inline bool isControlASCII(char c)
@ -16,3 +16,4 @@
 #cmakedefine01 USE_STATS
 #cmakedefine01 CLICKHOUSE_SPLIT_BINARY
 #cmakedefine01 USE_DATASKETCHES
+#cmakedefine01 USE_YAML_CPP
@ -7,9 +7,6 @@ endif()
 add_executable (sip_hash_perf sip_hash_perf.cpp)
 target_link_libraries (sip_hash_perf PRIVATE clickhouse_common_io)

-add_executable (auto_array auto_array.cpp)
-target_link_libraries (auto_array PRIVATE clickhouse_common_io)
-
 add_executable (small_table small_table.cpp)
 target_link_libraries (small_table PRIVATE clickhouse_common_io)
@ -36,7 +33,7 @@ add_executable (arena_with_free_lists arena_with_free_lists.cpp)
 target_link_libraries (arena_with_free_lists PRIVATE dbms)

 add_executable (lru_hash_map_perf lru_hash_map_perf.cpp)
-target_link_libraries (lru_hash_map_perf PRIVATE clickhouse_common_io)
+target_link_libraries (lru_hash_map_perf PRIVATE dbms)

 add_executable (thread_creation_latency thread_creation_latency.cpp)
 target_link_libraries (thread_creation_latency PRIVATE clickhouse_common_io)
@ -1,197 +0,0 @@
#include <iostream>
#include <iomanip>
#include <map>

#include <pcg_random.hpp>
#include <Core/Field.h>
#include <Common/HashTable/HashMap.h>
#include <Common/AutoArray.h>
#include <IO/WriteHelpers.h>

#include <Common/Stopwatch.h>


int main(int argc, char ** argv)
{
    pcg64 rng;

    {
        size_t n = 10;
        using T = std::string;
        DB::AutoArray<T> arr(n);

        for (size_t i = 0; i < arr.size(); ++i)
            arr[i] = "Hello, world! " + DB::toString(i);

        for (auto & elem : arr)
            std::cerr << elem << std::endl;
    }

    std::cerr << std::endl;

    {
        size_t n = 10;
        using T = std::string;
        using Arr = DB::AutoArray<T>;
        Arr arr;

        arr.resize(n);
        for (size_t i = 0; i < arr.size(); ++i)
            arr[i] = "Hello, world! " + DB::toString(i);

        for (auto & elem : arr)
            std::cerr << elem << std::endl;

        std::cerr << std::endl;

        Arr arr2 = std::move(arr);

        std::cerr << arr.size() << ", " << arr2.size() << std::endl; // NOLINT

        for (auto & elem : arr2)
            std::cerr << elem << std::endl;
    }

    std::cerr << std::endl;

    {
        size_t n = 10;
        size_t keys = 10;
        using T = std::string;
        using Arr = DB::AutoArray<T>;
        using Map = std::map<Arr, T>;
        Map map;

        for (size_t i = 0; i < keys; ++i)
        {
            Arr key(n);
            for (size_t j = 0; j < n; ++j)
                key[j] = DB::toString(rng());

            map[std::move(key)] = "Hello, world! " + DB::toString(i);
        }

        for (const auto & kv : map)
        {
            std::cerr << "[";
            for (size_t j = 0; j < n; ++j)
                std::cerr << (j == 0 ? "" : ", ") << kv.first[j];
            std::cerr << "]";

            std::cerr << ":\t" << kv.second << std::endl;
        }

        std::cerr << std::endl;

        Map map2 = std::move(map);

        for (const auto & kv : map2)
        {
            std::cerr << "[";
            for (size_t j = 0; j < n; ++j)
                std::cerr << (j == 0 ? "" : ", ") << kv.first[j];
            std::cerr << "]";

            std::cerr << ":\t" << kv.second << std::endl;
        }
    }

    std::cerr << std::endl;

    {
        size_t n = 10;
        size_t keys = 10;
        using T = std::string;
        using Arr = DB::AutoArray<T>;
        using Vec = std::vector<Arr>;
        Vec vec;

        for (size_t i = 0; i < keys; ++i)
        {
            Arr key(n);
            for (size_t j = 0; j < n; ++j)
                key[j] = DB::toString(rng());

            vec.push_back(std::move(key));
        }

        for (const auto & elem : vec)
        {
            std::cerr << "[";
            for (size_t j = 0; j < n; ++j)
                std::cerr << (j == 0 ? "" : ", ") << elem[j];
            std::cerr << "]" << std::endl;
        }

        std::cerr << std::endl;

        Vec vec2 = std::move(vec);

        for (const auto & elem : vec2)
        {
            std::cerr << "[";
            for (size_t j = 0; j < n; ++j)
                std::cerr << (j == 0 ? "" : ", ") << elem[j];
            std::cerr << "]" << std::endl;
        }
    }

    if (argc == 2 && !strcmp(argv[1], "1"))
    {
        size_t n = 5;
        size_t map_size = 1000000;

        using T = DB::Field;
        T field = std::string("Hello, world");

        using Arr = std::vector<T>;
        using Map = HashMap<UInt64, Arr>;

        Stopwatch watch;

        Map map;
        for (size_t i = 0; i < map_size; ++i)
        {
            Map::LookupResult it;
            bool inserted;

            map.emplace(rng(), it, inserted);
            if (inserted)
            {
                new (&it->getMapped()) Arr(n);

                for (size_t j = 0; j < n; ++j)
                    (it->getMapped())[j] = field;
            }
        }

        std::cerr << std::fixed << std::setprecision(2)
            << "Vector: Elapsed: " << watch.elapsedSeconds()
            << " (" << map_size / watch.elapsedSeconds() << " rows/sec., "
            << "sizeof(Map::value_type) = " << sizeof(Map::value_type)
            << std::endl;
    }

    {
        size_t n = 10000;
        using Arr = DB::AutoArray<std::string>;
        Arr arr1(n);
        Arr arr2(n);

        for (size_t i = 0; i < n; ++i)
        {
            arr1[i] = "Hello, world! " + DB::toString(i);
            arr2[i] = "Goodbye, world! " + DB::toString(i);
        }

        arr2 = std::move(arr1);
        arr1.resize(n); // NOLINT

        std::cerr
            << "arr1.size(): " << arr1.size() << ", arr2.size(): " << arr2.size() << std::endl
            << "arr1.data(): " << arr1.data() << ", arr2.data(): " << arr2.data() << std::endl
            << "arr1[0]: " << arr1[0] << ", arr2[0]: " << arr2[0] << std::endl;
    }

    return 0;
}
@ -61,6 +61,10 @@ static void NO_INLINE testForType(size_t method, size_t rows_size)
         test<Key, ::absl::flat_hash_map<Key, UInt64>>(data.data(), data.size(), "Abseil HashMap");
     }
     else if (method == 3)
     {
+        test<Key, ::absl::flat_hash_map<Key, UInt64, DefaultHash<Key>>>(data.data(), data.size(), "Abseil HashMap with CH Hash");
+    }
+    else if (method == 4)
+    {
         test<Key, std::unordered_map<Key, UInt64>>(data.data(), data.size(), "std::unordered_map");
     }
@ -81,50 +85,110 @@ static void NO_INLINE testForType(size_t method, size_t rows_size)
 * ./integer_hash_tables_benchmark 1 $2 100000000 < $1
 * ./integer_hash_tables_benchmark 2 $2 100000000 < $1
 * ./integer_hash_tables_benchmark 3 $2 100000000 < $1
 * ./integer_hash_tables_benchmark 4 $2 100000000 < $1
 *
 * Results of this benchmark on hits_100m_obfuscated X86-64
 *
 * File hits_100m_obfuscated/201307_1_96_4/WatchID.bin
 * CH HashMap: Elapsed: 7.416 (13484217.815 elem/sec.), map size: 99997493
 * Google DenseMap: Elapsed: 10.303 (9706022.031 elem/sec.), map size: 99997493
 * Abseil HashMap: Elapsed: 9.106 (10982139.229 elem/sec.), map size: 99997493
 * Abseil HashMap with CH Hash: Elapsed: 9.221 (10845360.669 elem/sec.), map size: 99997493
 * std::unordered_map: Elapsed: 45.213 (2211758.706 elem/sec.), map size: 9999749
 *
 * File hits_100m_obfuscated/201307_1_96_4/URLHash.bin
 * CH HashMap: Elapsed: 2.620 (38168135.308 elem/sec.), map size: 20714865
 * Google DenseMap: Elapsed: 3.426 (29189309.058 elem/sec.), map size: 20714865
 * Abseil HashMap: Elapsed: 2.788 (35870495.097 elem/sec.), map size: 20714865
 * Abseil HashMap with CH Hash: Elapsed: 2.991 (33428850.155 elem/sec.), map size: 20714865
 * std::unordered_map: Elapsed: 8.503 (11760331.346 elem/sec.), map size: 20714865
 *
 * File hits_100m_obfuscated/201307_1_96_4/UserID.bin
 * CH HashMap: Elapsed: 2.157 (46352039.753 elem/sec.), map size: 17630976
 * Google DenseMap: Elapsed: 2.725 (36694226.782 elem/sec.), map size: 17630976
 * Abseil HashMap: Elapsed: 2.590 (38604284.187 elem/sec.), map size: 17630976
 * Abseil HashMap with CH Hash: Elapsed: 2.785 (35904856.137 elem/sec.), map size: 17630976
 * std::unordered_map: Elapsed: 7.268 (13759557.609 elem/sec.), map size: 17630976
 *
 * File hits_100m_obfuscated/201307_1_96_4/RegionID.bin
 * CH HashMap: Elapsed: 0.192 (521583315.810 elem/sec.), map size: 9040
 * Google DenseMap: Elapsed: 0.297 (337081407.799 elem/sec.), map size: 9046
 * Abseil HashMap: Elapsed: 0.295 (338805623.511 elem/sec.), map size: 9040
 * Abseil HashMap with CH Hash: Elapsed: 0.331 (302155391.036 elem/sec.), map size: 9040
 * std::unordered_map: Elapsed: 0.455 (219971555.390 elem/sec.), map size: 9040
 *
 * File hits_100m_obfuscated/201307_1_96_4/CounterID.bin
 * CH HashMap: Elapsed: 0.217 (460216823.609 elem/sec.), map size: 6506
 * Google DenseMap: Elapsed: 0.373 (267838665.098 elem/sec.), map size: 6506
 * Abseil HashMap: Elapsed: 0.325 (308124728.989 elem/sec.), map size: 6506
 * Abseil HashMap with CH Hash: Elapsed: 0.354 (282167144.801 elem/sec.), map size: 6506
 * std::unordered_map: Elapsed: 0.390 (256573354.171 elem/sec.), map size: 6506
 *
 * File hits_100m_obfuscated/201307_1_96_4/TraficSourceID.bin
 * CH HashMap: Elapsed: 0.246 (406714566.282 elem/sec.), map size: 10
 * Google DenseMap: Elapsed: 0.760 (131615151.233 elem/sec.), map size: 1565609 /// Broken because there is 0 key in dataset
 * Abseil HashMap: Elapsed: 0.309 (324068156.680 elem/sec.), map size: 10
 * Abseil HashMap with CH Hash: Elapsed: 0.339 (295108223.814 elem/sec.), map size: 10
 * std::unordered_map: Elapsed: 0.811 (123304031.195 elem/sec.), map size: 10
 *
 * File hits_100m_obfuscated/201307_1_96_4/AdvEngineID.bin
 * CH HashMap: Elapsed: 0.155 (643245257.748 elem/sec.), map size: 19
 * Google DenseMap: Elapsed: 1.629 (61395025.417 elem/sec.), map size: 32260732 // Broken because there is 0 key in dataset
 * Abseil HashMap: Elapsed: 0.292 (342765027.204 elem/sec.), map size: 19
 * Abseil HashMap with CH Hash: Elapsed: 0.330 (302822020.210 elem/sec.), map size: 19
 * std::unordered_map: Elapsed: 0.308 (325059333.730 elem/sec.), map size: 19
 *
 *
 * Results of this benchmark on hits_100m_obfuscated AARCH64
 *
 * File hits_100m_obfuscated/201307_1_96_4/WatchID.bin
 * CH HashMap: Elapsed: 9.530 (10493528.533 elem/sec.), map size: 99997493
 * Google DenseMap: Elapsed: 14.436 (6927091.135 elem/sec.), map size: 99997493
 * Abseil HashMap: Elapsed: 16.671 (5998504.085 elem/sec.), map size: 99997493
 * Abseil HashMap with CH Hash: Elapsed: 16.803 (5951365.711 elem/sec.), map size: 99997493
 * std::unordered_map: Elapsed: 50.805 (1968305.658 elem/sec.), map size: 99997493
 *
 * File hits_100m_obfuscated/201307_1_96_4/URLHash.bin
 * CH HashMap: Elapsed: 3.693 (27076878.092 elem/sec.), map size: 20714865
 * Google DenseMap: Elapsed: 5.051 (19796401.694 elem/sec.), map size: 20714865
 * Abseil HashMap: Elapsed: 5.617 (17804528.625 elem/sec.), map size: 20714865
 * Abseil HashMap with CH Hash: Elapsed: 5.702 (17537013.639 elem/sec.), map size: 20714865
 * std::unordered_map: Elapsed: 10.757 (9296040.953 elem/sec.), map size: 2071486
 *
 * File hits_100m_obfuscated/201307_1_96_4/UserID.bin
 * CH HashMap: Elapsed: 2.982 (33535795.695 elem/sec.), map size: 17630976
 * Google DenseMap: Elapsed: 3.940 (25381557.959 elem/sec.), map size: 17630976
 * Abseil HashMap: Elapsed: 4.493 (22259078.458 elem/sec.), map size: 17630976
 * Abseil HashMap with CH Hash: Elapsed: 4.596 (21759738.710 elem/sec.), map size: 17630976
 * std::unordered_map: Elapsed: 9.035 (11067903.596 elem/sec.), map size: 17630976
 *
 * File hits_100m_obfuscated/201307_1_96_4/RegionID.bin
 * CH HashMap: Elapsed: 0.302 (331026285.361 elem/sec.), map size: 9040
 * Google DenseMap: Elapsed: 0.623 (160419421.840 elem/sec.), map size: 9046
 * Abseil HashMap: Elapsed: 0.981 (101971186.758 elem/sec.), map size: 9040
 * Abseil HashMap with CH Hash: Elapsed: 0.991 (100932993.199 elem/sec.), map size: 9040
 * std::unordered_map: Elapsed: 0.809 (123541402.715 elem/sec.), map size: 9040
 *
 * File hits_100m_obfuscated/201307_1_96_4/CounterID.bin
 * CH HashMap: Elapsed: 0.343 (291821742.078 elem/sec.), map size: 6506
 * Google DenseMap: Elapsed: 0.718 (139191105.450 elem/sec.), map size: 6506
 * Abseil HashMap: Elapsed: 1.019 (98148285.278 elem/sec.), map size: 6506
 * Abseil HashMap with CH Hash: Elapsed: 1.048 (95446843.667 elem/sec.), map size: 6506
 * std::unordered_map: Elapsed: 0.701 (142701070.085 elem/sec.), map size: 6506
 *
 * File hits_100m_obfuscated/201307_1_96_4/TraficSourceID.bin
 * CH HashMap: Elapsed: 0.376 (265905243.103 elem/sec.), map size: 10
 * Google DenseMap: Elapsed: 1.309 (76420707.298 elem/sec.), map size: 1565609 /// Broken because there is 0 key in dataset
 * Abseil HashMap: Elapsed: 0.955 (104668109.775 elem/sec.), map size: 10
 * Abseil HashMap with CH Hash: Elapsed: 0.967 (103456305.391 elem/sec.), map size: 10
 * std::unordered_map: Elapsed: 1.241 (80591305.890 elem/sec.), map size: 10
 *
 * File hits_100m_obfuscated/201307_1_96_4/AdvEngineID.bin
 * CH HashMap: Elapsed: 0.213 (470208130.105 elem/sec.), map size: 19
 * Google DenseMap: Elapsed: 2.525 (39607131.523 elem/sec.), map size: 32260732 /// Broken because there is 0 key in dataset
 * Abseil HashMap: Elapsed: 0.950 (105233678.618 elem/sec.), map size: 19
 * Abseil HashMap with CH Hash: Elapsed: 0.962 (104001230.717 elem/sec.), map size: 19
 * std::unordered_map: Elapsed: 0.585 (171059989.837 elem/sec.), map size: 19
 */

 int main(int argc, char ** argv)
@ -7,23 +7,26 @@
 #include <Common/Stopwatch.h>
 #include <Common/HashTable/LRUHashMap.h>

+#include <IO/ReadBufferFromFile.h>
+#include <Compression/CompressedReadBuffer.h>
+
 template<class Key, class Value>
 class LRUHashMapBasic
 {
 public:
     using key_type = Key;
     using value_type = Value;
-    using list_type = std::list<key_type>;
-    using node = std::pair<value_type, typename list_type::iterator>;
-    using map_type = std::unordered_map<key_type, node, DefaultHash<Key>>;
+    using list_type = std::list<std::pair<key_type, value_type>>;
+    using map_type = std::unordered_map<key_type, typename list_type::iterator>;

-    LRUHashMapBasic(size_t max_size_, bool preallocated)
+    LRUHashMapBasic(size_t max_size_, bool preallocated = false)
         : hash_map(preallocated ? max_size_ : 32)
         , max_size(max_size_)
     {
     }

-    void insert(const Key &key, const Value &value)
+    template<typename ...Args>
+    std::pair<Value *, bool> emplace(const Key &key, Args &&... args)
     {
         auto it = hash_map.find(key);
@ -33,40 +36,39 @@ public:
         {
             auto iterator_to_remove = list.begin();

-            hash_map.erase(*iterator_to_remove);
+            auto & key_to_remove = iterator_to_remove->first;
+            hash_map.erase(key_to_remove);
+
             list.erase(iterator_to_remove);
         }

-        list.push_back(key);
-        hash_map[key] = std::make_pair(value, --list.end());
+        Value value(std::forward<Args>(args)...);
+        auto node = std::make_pair(key, std::move(value));
+
+        list.push_back(std::move(node));
+
+        auto inserted_iterator = --list.end();
+
+        hash_map[key] = inserted_iterator;
+
+        return std::make_pair(&inserted_iterator->second, true);
     }
     else
     {
-        auto & [value_to_update, iterator_in_list_to_update] = it->second;
+        auto & iterator_in_list_to_update = it->second;

         list.splice(list.end(), list, iterator_in_list_to_update);
+        iterator_in_list_to_update = --list.end();

-        iterator_in_list_to_update = list.end();
-        value_to_update = value;
+        return std::make_pair(&iterator_in_list_to_update->second, false);
     }
 }

-    value_type & get(const key_type &key)
+    value_type & operator[](const key_type & key)
     {
-        auto iterator_in_map = hash_map.find(key);
-        assert(iterator_in_map != hash_map.end());
-
-        auto & [value_to_return, iterator_in_list_to_update] = iterator_in_map->second;
-
-        list.splice(list.end(), list, iterator_in_list_to_update);
-        iterator_in_list_to_update = list.end();
-
-        return value_to_return;
-    }
-
-    const value_type & get(const key_type & key) const
-    {
-        return const_cast<std::decay_t<decltype(*this)> *>(this)->get(key);
+        auto [it, _] = emplace(key);
+        return *it;
     }

     size_t getMaxSize() const
@ -101,110 +103,45 @@ private:
|
|||||||
size_t max_size;
|
size_t max_size;
|
||||||
};
|
};
|
||||||
|
|
||||||
-std::vector<UInt64> generateNumbersToInsert(size_t numbers_to_insert_size)
+template <typename Key, typename Map>
+static void NO_INLINE test(const Key * data, size_t size, const std::string & name)
 {
-    std::vector<UInt64> numbers;
-    numbers.reserve(numbers_to_insert_size);
+    size_t cache_size = size / 10;
+    Map cache(cache_size);

-    std::random_device rd;
-    pcg64 gen(rd());
-
-    UInt64 min = std::numeric_limits<UInt64>::min();
-    UInt64 max = std::numeric_limits<UInt64>::max();
-
-    auto distribution = std::uniform_int_distribution<>(min, max);
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
-    {
-        UInt64 number = distribution(gen);
-        numbers.emplace_back(number);
-    }
-
-    return numbers;
-}
-
-void testInsertElementsIntoHashMap(size_t map_size, const std::vector<UInt64> & numbers_to_insert, bool preallocated)
-{
-    size_t numbers_to_insert_size = numbers_to_insert.size();
-    std::cout << "TestInsertElementsIntoHashMap preallocated map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size;
-    std::cout << std::endl;
-
-    HashMap<int, int> hash_map(preallocated ? map_size : 32);
-
     Stopwatch watch;

-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
-        hash_map.insert({ numbers_to_insert[i], numbers_to_insert[i] });
+    for (size_t i = 0; i < size; ++i)
+        ++cache[data[i]];

-    std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
+    watch.stop();

-    UInt64 summ = 0;
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
-    {
-        auto * it = hash_map.find(numbers_to_insert[i]);
-
-        if (it)
-            summ += it->getMapped();
-    }
-
-    std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
+    std::cerr << name
+        << ":\nElapsed: " << watch.elapsedSeconds()
+        << " (" << size / watch.elapsedSeconds() << " elem/sec.)"
+        << ", map size: " << cache.size() << "\n";
 }

-void testInsertElementsIntoStandardMap(size_t map_size, const std::vector<UInt64> & numbers_to_insert, bool preallocated)
+template <typename Key>
+static void NO_INLINE testForType(size_t method, size_t rows_size)
 {
-    size_t numbers_to_insert_size = numbers_to_insert.size();
-    std::cout << "TestInsertElementsIntoStandardMap map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size;
-    std::cout << std::endl;
-
-    std::unordered_map<int, int> hash_map(preallocated ? map_size : 32);
+    std::cerr << std::fixed << std::setprecision(3);
+
+    std::vector<Key> data(rows_size);

-    Stopwatch watch;
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
-        hash_map.insert({ numbers_to_insert[i], numbers_to_insert[i] });
-
-    std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
-
-    UInt64 summ = 0;
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
     {
-        auto it = hash_map.find(numbers_to_insert[i]);
-
-        if (it != hash_map.end())
-            summ += it->second;
+        DB::ReadBufferFromFileDescriptor in1(STDIN_FILENO);
+        DB::CompressedReadBuffer in2(in1);
+        in2.readStrict(reinterpret_cast<char*>(data.data()), sizeof(data[0]) * rows_size);
     }

-    std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
-}
-
-template<typename LRUCache>
-UInt64 testInsertIntoEmptyCache(size_t map_size, const std::vector<UInt64> & numbers_to_insert, bool preallocated)
-{
-    size_t numbers_to_insert_size = numbers_to_insert.size();
-    std::cout << "Test testInsertPreallocated preallocated map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size;
-    std::cout << std::endl;
-
-    LRUCache cache(map_size, preallocated);
-    Stopwatch watch;
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
+    if (method == 0)
     {
-        cache.insert(numbers_to_insert[i], numbers_to_insert[i]);
+        test<Key, LRUHashMap<Key, UInt64>>(data.data(), data.size(), "CH HashMap");
+    }
+    else if (method == 1)
+    {
+        test<Key, LRUHashMapBasic<Key, UInt64>>(data.data(), data.size(), "BasicLRU");
     }

-    std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
-
-    UInt64 summ = 0;
-
-    for (size_t i = 0; i < numbers_to_insert_size; ++i)
-        if (cache.contains(numbers_to_insert[i]))
-            summ += cache.get(numbers_to_insert[i]);
-
-    std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl;
-
-    return summ;
 }

 int main(int argc, char ** argv)

@ -212,33 +149,34 @@ int main(int argc, char ** argv)
     (void)(argc);
     (void)(argv);

-    size_t hash_map_size = 1200000;
-    size_t numbers_to_insert_size = 12000000;
-    std::vector<UInt64> numbers = generateNumbersToInsert(numbers_to_insert_size);
+    if (argc < 4)
+    {
+        std::cerr << "Usage: program method column_type_name rows_count < input_column.bin \n";
+        return 1;
+    }

-    std::cout << "Test insert into HashMap preallocated=0" << std::endl;
-    testInsertElementsIntoHashMap(hash_map_size, numbers, true);
-    std::cout << std::endl;
+    size_t method = std::stoull(argv[1]);
+    std::string type_name = std::string(argv[2]);
+    size_t n = std::stoull(argv[3]);

-    std::cout << "Test insert into HashMap preallocated=1" << std::endl;
-    testInsertElementsIntoHashMap(hash_map_size, numbers, true);
-    std::cout << std::endl;
-    std::cout << "Test LRUHashMap preallocated=0" << std::endl;
-    testInsertIntoEmptyCache<LRUHashMap<UInt64, UInt64>>(hash_map_size, numbers, false);
-    std::cout << std::endl;
-    std::cout << "Test LRUHashMap preallocated=1" << std::endl;
-    testInsertIntoEmptyCache<LRUHashMap<UInt64, UInt64>>(hash_map_size, numbers, true);
-    std::cout << std::endl;
-    std::cout << "Test LRUHashMapBasic preallocated=0" << std::endl;
-    testInsertIntoEmptyCache<LRUHashMapBasic<UInt64, UInt64>>(hash_map_size, numbers, false);
-    std::cout << std::endl;
-    std::cout << "Test LRUHashMapBasic preallocated=1" << std::endl;
-    testInsertIntoEmptyCache<LRUHashMapBasic<UInt64, UInt64>>(hash_map_size, numbers, true);
-    std::cout << std::endl;
+    if (type_name == "UInt8")
+        testForType<UInt8>(method, n);
+    else if (type_name == "UInt16")
+        testForType<UInt16>(method, n);
+    else if (type_name == "UInt32")
+        testForType<UInt32>(method, n);
+    else if (type_name == "UInt64")
+        testForType<UInt64>(method, n);
+    else if (type_name == "Int8")
+        testForType<Int8>(method, n);
+    else if (type_name == "Int16")
+        testForType<Int16>(method, n);
+    else if (type_name == "Int32")
+        testForType<Int32>(method, n);
+    else if (type_name == "Int64")
+        testForType<Int64>(method, n);
+    else
+        std::cerr << "Unexpected type passed " << type_name << std::endl;

     return 0;
 }
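To make the reworked baseline easier to follow than the scattered hunks above, here is a minimal, self-contained sketch of the pattern LRUHashMapBasic now uses: a std::list of (key, value) nodes kept in recency order plus a std::unordered_map from key to list iterator, with eviction from the front and splice-to-back on reuse. The class name, the capacity check and the small main() below are illustrative only; this is not ClickHouse code.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <list>
#include <unordered_map>
#include <utility>

/// Minimal LRU map sketch: the list holds (key, value) nodes with the least
/// recently used node at the front; the hash map points into the list.
template <typename Key, typename Value>
class SimpleLRUMap
{
public:
    explicit SimpleLRUMap(size_t max_size_) : max_size(max_size_) {}

    /// Returns a pointer to the stored value and whether a new node was inserted.
    template <typename... Args>
    std::pair<Value *, bool> emplace(const Key & key, Args &&... args)
    {
        auto it = map.find(key);
        if (it == map.end())
        {
            if (list.size() >= max_size)
            {
                /// Evict the least recently used node (capacity rule assumed here).
                map.erase(list.front().first);
                list.pop_front();
            }
            list.emplace_back(key, Value(std::forward<Args>(args)...));
            auto inserted = std::prev(list.end());
            map[key] = inserted;
            return {&inserted->second, true};
        }

        /// Key already present: move its node to the most recently used position.
        list.splice(list.end(), list, it->second);
        it->second = std::prev(list.end());
        return {&it->second->second, false};
    }

    Value & operator[](const Key & key) { return *emplace(key).first; }

    size_t size() const { return list.size(); }

private:
    std::list<std::pair<Key, Value>> list;
    std::unordered_map<Key, typename std::list<std::pair<Key, Value>>::iterator> map;
    size_t max_size;
};

int main()
{
    SimpleLRUMap<uint64_t, uint64_t> cache(2);
    ++cache[1];
    ++cache[2];
    ++cache[1];   /// Touch key 1, so key 2 becomes the eviction candidate.
    ++cache[3];   /// Evicts key 2.
    assert(cache.size() == 2);
    return 0;
}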
@ -1,31 +1,121 @@
 #include <Common/isLocalAddress.h>

+#include <ifaddrs.h>
 #include <cstring>
+#include <optional>
 #include <common/types.h>
-#include <Poco/Util/Application.h>
-#include <Poco/Net/NetworkInterface.h>
+#include <Common/Exception.h>
+#include <Poco/Net/IPAddress.h>
 #include <Poco/Net/SocketAddress.h>


 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int SYSTEM_ERROR;
+}
+
+namespace
+{
+
+struct NetworkInterfaces
+{
+    ifaddrs * ifaddr;
+    NetworkInterfaces()
+    {
+        if (getifaddrs(&ifaddr) == -1)
+        {
+            throwFromErrno("Cannot getifaddrs", ErrorCodes::SYSTEM_ERROR);
+        }
+    }
+
+    bool hasAddress(const Poco::Net::IPAddress & address) const
+    {
+        ifaddrs * iface;
+        for (iface = ifaddr; iface != nullptr; iface = iface->ifa_next)
+        {
+            /// Point-to-point (VPN) addresses may have NULL ifa_addr
+            if (!iface->ifa_addr)
+                continue;
+
+            auto family = iface->ifa_addr->sa_family;
+            std::optional<Poco::Net::IPAddress> interface_address;
+            switch (family)
+            {
+                /// We interested only in IP-adresses
+                case AF_INET:
+                {
+                    interface_address.emplace(*(iface->ifa_addr));
+                    break;
+                }
+                case AF_INET6:
+                {
+                    interface_address.emplace(&reinterpret_cast<const struct sockaddr_in6*>(iface->ifa_addr)->sin6_addr, sizeof(struct in6_addr));
+                    break;
+                }
+                default:
+                    continue;
+            }
+
+            /** Compare the addresses without taking into account `scope`.
+              * Theoretically, this may not be correct - depends on `route` setting
+              * - through which interface we will actually access the specified address.
+              */
+            if (interface_address->length() == address.length()
+                && 0 == memcmp(interface_address->addr(), address.addr(), address.length()))
+                return true;
+        }
+        return false;
+    }
+
+    ~NetworkInterfaces()
+    {
+        freeifaddrs(ifaddr);
+    }
+};
+
+}
+
+
 bool isLocalAddress(const Poco::Net::IPAddress & address)
 {
-    static auto interfaces = Poco::Net::NetworkInterface::list();
+    /** 127.0.0.1 is treat as local address unconditionally.
+      * ::1 is also treat as local address unconditionally.
+      *
+      * 127.0.0.{2..255} are not treat as local addresses, because they are used in tests
+      * to emulate distributed queries across localhost.
+      *
+      * But 127.{0,1}.{0,1}.{0,1} are treat as local addresses,
+      * because they are used in Debian for localhost.
+      */
+    if (address.isLoopback())
+    {
+        if (address.family() == Poco::Net::AddressFamily::IPv4)
+        {
+            /// The address is located in memory in big endian form (network byte order).
+            const unsigned char * digits = static_cast<const unsigned char *>(address.addr());

-    return interfaces.end() != std::find_if(interfaces.begin(), interfaces.end(),
-        [&] (const Poco::Net::NetworkInterface & interface)
-        {
-            /** Compare the addresses without taking into account `scope`.
-              * Theoretically, this may not be correct - depends on `route` setting
-              * - through which interface we will actually access the specified address.
-              */
-            return interface.address().length() == address.length()
-                && 0 == memcmp(interface.address().addr(), address.addr(), address.length());
-        });
+            if (digits[0] == 127
+                && digits[1] <= 1
+                && digits[2] <= 1
+                && digits[3] <= 1)
+            {
+                return true;
+            }
+        }
+        else if (address.family() == Poco::Net::AddressFamily::IPv6)
+        {
+            return true;
+        }
+    }
+
+    NetworkInterfaces interfaces;
+    return interfaces.hasAddress(address);
 }


 bool isLocalAddress(const Poco::Net::SocketAddress & address, UInt16 clickhouse_port)
 {
     return clickhouse_port == address.port() && isLocalAddress(address.host());
@ -28,15 +28,27 @@ std::pair<std::string, UInt16> parseAddress(const std::string & str, UInt16 default_port)
             throw Exception("Illegal address passed to function parseAddress: "
                 "the address begins with opening square bracket, but no closing square bracket found", ErrorCodes::BAD_ARGUMENTS);

-        port = find_first_symbols<':'>(closing_square_bracket + 1, end);
+        port = closing_square_bracket + 1;
     }
     else
         port = find_first_symbols<':'>(begin, end);

     if (port != end)
     {
-        UInt16 port_number = parse<UInt16>(port + 1);
-        return { std::string(begin, port), port_number };
+        if (*port != ':')
+            throw Exception(ErrorCodes::BAD_ARGUMENTS,
+                "Illegal port prefix passed to function parseAddress: {}", port);
+
+        ++port;
+
+        UInt16 port_number;
+        ReadBufferFromMemory port_buf(port, end - port);
+        if (!tryReadText<UInt16>(port_number, port_buf) || !port_buf.eof())
+        {
+            throw Exception(ErrorCodes::BAD_ARGUMENTS,
+                "Illegal port passed to function parseAddress: {}", port);
+        }
+        return { std::string(begin, port - 1), port_number };
     }
     else if (default_port)
     {
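The substance of the parseAddress change is that the text after ':' must now parse as a UInt16 and must be consumed to the very end, otherwise a BAD_ARGUMENTS exception is thrown. The same idea in a standalone sketch, using std::from_chars in place of ClickHouse's ReadBufferFromMemory/tryReadText helpers so it compiles outside the codebase (names are illustrative):

#include <charconv>
#include <cstdint>
#include <iostream>
#include <optional>
#include <string_view>

/// Sketch of the new port validation: the text after ':' must parse as UInt16
/// and must be consumed completely, otherwise the address is rejected.
std::optional<uint16_t> parsePort(std::string_view text)
{
    uint16_t port = 0;
    const char * begin = text.data();
    const char * end = begin + text.size();
    auto [ptr, ec] = std::from_chars(begin, end, port);
    if (ec != std::errc() || ptr != end)
        return std::nullopt;   /// trailing characters such as "9000abc" are rejected
    return port;
}

int main()
{
    std::cout << parsePort("9000").value_or(0) << '\n';      /// prints 9000
    std::cout << parsePort("9000abc").has_value() << '\n';    /// prints 0 (rejected)
    std::cout << parsePort("70000").has_value() << '\n';      /// prints 0 (out of range for UInt16)
    return 0;
}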
src/Common/tests/gtest_local_address.cpp (new file, 40 lines)
@ -0,0 +1,40 @@
+#include <gtest/gtest.h>
+#include <Common/isLocalAddress.h>
+#include <Common/ShellCommand.h>
+#include <Poco/Net/IPAddress.h>
+#include <IO/ReadHelpers.h>
+
+
+TEST(LocalAddress, SmokeTest)
+{
+    auto cmd = DB::ShellCommand::executeDirect("/bin/hostname", {"-i"});
+    std::string address_str;
+    DB::readString(address_str, cmd->out);
+    cmd->wait();
+    std::cerr << "Got Address: " << address_str << std::endl;
+
+    Poco::Net::IPAddress address(address_str);
+
+    EXPECT_TRUE(DB::isLocalAddress(address));
+}
+
+TEST(LocalAddress, Localhost)
+{
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"127.0.0.1"}));
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"127.0.1.1"}));
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"127.1.1.1"}));
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"127.1.0.1"}));
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"127.1.0.0"}));
+    EXPECT_TRUE(DB::isLocalAddress(Poco::Net::IPAddress{"::1"}));
+
+    /// Make sure we don't mess with the byte order.
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"1.0.0.127"}));
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"1.1.1.127"}));
+
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"0.0.0.0"}));
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"::"}));
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"::2"}));
+
+    /// See the comment in the implementation of isLocalAddress.
+    EXPECT_FALSE(DB::isLocalAddress(Poco::Net::IPAddress{"127.0.0.2"}));
+}
@ -419,31 +419,56 @@ TEST(Common, PODArrayBasicSwapMoveConstructor)

 TEST(Common, PODArrayInsert)
 {
-    std::string str = "test_string_abacaba";
-    PODArray<char> chars;
-    chars.insert(chars.end(), str.begin(), str.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
-
-    std::string insert_in_the_middle = "insert_in_the_middle";
-    auto pos = str.size() / 2;
-    str.insert(str.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
-    chars.insert(chars.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
-
-    std::string insert_with_resize;
-    insert_with_resize.reserve(chars.capacity() * 2);
-    char cur_char = 'a';
-    while (insert_with_resize.size() < insert_with_resize.capacity())
     {
-        insert_with_resize += cur_char;
-        if (cur_char == 'z')
-            cur_char = 'a';
-        else
-            ++cur_char;
+        std::string str = "test_string_abacaba";
+        PODArray<char> chars;
+        chars.insert(chars.end(), str.begin(), str.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+
+        std::string insert_in_the_middle = "insert_in_the_middle";
+        auto pos = str.size() / 2;
+        str.insert(str.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
+        chars.insert(chars.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+
+        std::string insert_with_resize;
+        insert_with_resize.reserve(chars.capacity() * 2);
+        char cur_char = 'a';
+        while (insert_with_resize.size() < insert_with_resize.capacity())
+        {
+            insert_with_resize += cur_char;
+            if (cur_char == 'z')
+                cur_char = 'a';
+            else
+                ++cur_char;
+        }
+        str.insert(str.begin(), insert_with_resize.begin(), insert_with_resize.end());
+        chars.insert(chars.begin(), insert_with_resize.begin(), insert_with_resize.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+    }
+    {
+        PODArray<UInt64> values;
+        PODArray<UInt64> values_to_insert;
+
+        for (size_t i = 0; i < 120; ++i)
+            values.emplace_back(i);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 120);
+
+        values_to_insert.emplace_back(0);
+        values_to_insert.emplace_back(1);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 122);
+
+        values_to_insert.clear();
+        for (size_t i = 0; i < 240; ++i)
+            values_to_insert.emplace_back(i);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 362);
     }
-    str.insert(str.begin(), insert_with_resize.begin(), insert_with_resize.end());
-    chars.insert(chars.begin(), insert_with_resize.begin(), insert_with_resize.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
 }

 TEST(Common, PODArrayInsertFromItself)
Some files were not shown because too many files have changed in this diff.