Merge branch 'master' into uniq-theta-sketch

This commit is contained in:
mergify[bot] 2021-05-21 20:52:02 +00:00 committed by GitHub
commit ee939d9b5f
955 changed files with 18292 additions and 6674 deletions

1 .gitmodules vendored

@ -17,6 +17,7 @@
[submodule "contrib/zlib-ng"]
path = contrib/zlib-ng
url = https://github.com/ClickHouse-Extras/zlib-ng.git
branch = clickhouse-new
[submodule "contrib/googletest"]
path = contrib/googletest
url = https://github.com/google/googletest.git


@ -1,3 +1,142 @@
## ClickHouse release 21.5, 2021-05-20
#### Backward Incompatible Change
* Change comparison of integers and floating point numbers when the integer is not exactly representable in the floating point data type. In the new version the comparison returns false because a rounding error occurs. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not exactly representable as a floating point number (`9223372036854775808.0` is rounded to `9223372036854776000.0`). In the previous version the comparison returned true, as if the numbers were equal, because the floating point number `9223372036854776000.0` converted back to UInt64 yields `9223372036854775808`. For reference, the Python programming language also treats these numbers as equal. But this behaviour depended on the CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so the comparison was made more precise: an integer and a float are treated as equal only if the integer is exactly representable in the floating point type (see the sketch after this list). [#22595](https://github.com/ClickHouse/ClickHouse/pull/22595) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove support for `argMin` and `argMax` with a single `Tuple` argument. The code was not memory-safe, the feature was added by mistake, and it is confusing for people. These functions can be reintroduced under different names later. This fixes [#22384](https://github.com/ClickHouse/ClickHouse/issues/22384) and reverts [#17359](https://github.com/ClickHouse/ClickHouse/issues/17359). [#23393](https://github.com/ClickHouse/ClickHouse/pull/23393) ([alexey-milovidov](https://github.com/alexey-milovidov)).
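A minimal illustration of the new comparison semantics, based on the example from the entry above (the stated results assume ClickHouse 21.5 or later; earlier versions returned 1 for the first query):

```sql
-- Per the entry above, 9223372036854775808 is not exactly representable as a
-- floating point number, so this comparison now returns 0 (false).
SELECT 9223372036854775808 = 9223372036854775808.0 AS previously_equal;

-- Integers that are exactly representable in the floating point type still compare as equal (returns 1).
SELECT 1 = 1.0 AS still_equal;
```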
#### New Feature
* Added functions `dictGetChildren(dictionary, key)` and `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity. Improved performance of the `dictGetHierarchy` and `dictIsIn` functions. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dictGetOrNull`. It works like `dictGet`, but returns `Null` when the key is not found in the dictionary (see the sketch after this list). Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
* Added a table function `s3Cluster`, which allows processing files from `s3` in parallel on every node of a specified cluster. [#22012](https://github.com/ClickHouse/ClickHouse/pull/22012) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added support for replicas and shards in MySQL/PostgreSQL table engine / table function. You can write `SELECT * FROM mysql('host{1,2}-{1|2}', ...)`. Closes [#20969](https://github.com/ClickHouse/ClickHouse/issues/20969). [#22217](https://github.com/ClickHouse/ClickHouse/pull/22217) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Added `ALTER TABLE ... FETCH PART ...` query. It's similar to `FETCH PARTITION`, but fetches only one part. [#22706](https://github.com/ClickHouse/ClickHouse/pull/22706) ([turbo jason](https://github.com/songenjie)).
* Added a setting `max_distributed_depth` that limits the depth of recursive queries to `Distributed` tables. Closes [#20229](https://github.com/ClickHouse/ClickHouse/issues/20229). [#21942](https://github.com/ClickHouse/ClickHouse/pull/21942) ([flynn](https://github.com/ucasFL)).
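A sketch of the new dictionary functions from the entries above; the dictionary name `regions_dict` and its `name` attribute are hypothetical placeholders for a hierarchical dictionary defined elsewhere:

```sql
-- Direct children of key 1, returned as an array of keys (the inverse of dictGetHierarchy).
SELECT dictGetChildren('regions_dict', toUInt64(1)) AS direct_children;

-- All descendants of key 1; a level of 0 means unlimited depth.
SELECT dictGetDescendants('regions_dict', toUInt64(1), 0) AS all_descendants;

-- dictGetOrNull returns NULL instead of an error or a default value when the key is missing.
SELECT dictGetOrNull('regions_dict', 'name', toUInt64(424242)) AS maybe_name;
```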
#### Performance Improvement
* Improved performance of `intDiv` by dynamic dispatch for AVX2. This closes [#22314](https://github.com/ClickHouse/ClickHouse/issues/22314). [#23000](https://github.com/ClickHouse/ClickHouse/pull/23000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improved performance of reading from the `ArrowStream` input format for sources other than a local file (e.g. URL). [#22673](https://github.com/ClickHouse/ClickHouse/pull/22673) ([nvartolomei](https://github.com/nvartolomei)).
* Disabled compression by default when interacting with localhost (with clickhouse-client or server to server with distributed queries) via native protocol. It may improve performance of some import/export operations. This closes [#22234](https://github.com/ClickHouse/ClickHouse/issues/22234). [#22237](https://github.com/ClickHouse/ClickHouse/pull/22237) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Exclude values that do not belong to the shard from the right part of the IN section for distributed queries (under `optimize_skip_unused_shards_rewrite_in`, enabled by default, since it still requires `optimize_skip_unused_shards`). [#21511](https://github.com/ClickHouse/ClickHouse/pull/21511) ([Azat Khuzhin](https://github.com/azat)).
* Improved performance of reading a subset of columns with a `File`-like table engine and a column-oriented format like Parquet, Arrow or ORC. This closes [#20129](https://github.com/ClickHouse/ClickHouse/issues/20129). [#21302](https://github.com/ClickHouse/ClickHouse/pull/21302) ([keenwolf](https://github.com/keen-wolf)).
* Allow moving more conditions to `PREWHERE`, as it was possible before version 21.1 (adjustment of internal heuristics). An insufficient number of moved conditions could lead to worse performance. [#23397](https://github.com/ClickHouse/ClickHouse/pull/23397) ([Anton Popov](https://github.com/CurtizJ)).
* Improved performance of ODBC connections and fixed all the outstanding issues from the backlog, using the `nanodbc` library instead of `Poco::ODBC`. Closes [#9678](https://github.com/ClickHouse/ClickHouse/issues/9678). Added support for `DateTime64` and `Decimal*` for the ODBC table engine. Closes [#21961](https://github.com/ClickHouse/ClickHouse/issues/21961). Fixed an issue with Cyrillic text being truncated. Closes [#16246](https://github.com/ClickHouse/ClickHouse/issues/16246). Added connection pools for the ODBC bridge. [#21972](https://github.com/ClickHouse/ClickHouse/pull/21972) ([Kseniia Sumarokova](https://github.com/kssenii)).
#### Improvement
* Increase `max_uri_size` (the maximum size of URL in HTTP interface) to 1 MiB by default. This closes [#21197](https://github.com/ClickHouse/ClickHouse/issues/21197). [#22997](https://github.com/ClickHouse/ClickHouse/pull/22997) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Set `background_fetches_pool_size` to `8`, which is better for production usage with frequent small insertions or a slow ZooKeeper cluster. [#22945](https://github.com/ClickHouse/ClickHouse/pull/22945) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added `initial_array_size` and `max_array_size` options for `FlatDictionary`. [#22521](https://github.com/ClickHouse/ClickHouse/pull/22521) ([Maksim Kita](https://github.com/kitaisreal)).
* Added a new setting `non_replicated_deduplication_window` for insert deduplication in non-replicated MergeTree tables (see the sketch after this list). [#22514](https://github.com/ClickHouse/ClickHouse/pull/22514) ([alesapin](https://github.com/alesapin)).
* Update paths to the `CatBoost` model configs in config reloading. [#22434](https://github.com/ClickHouse/ClickHouse/pull/22434) ([Kruglov Pavel](https://github.com/Avogar)).
* Added `Decimal256` type support in dictionaries. `Decimal256` is an experimental feature. Closes [#20979](https://github.com/ClickHouse/ClickHouse/issues/20979). [#22960](https://github.com/ClickHouse/ClickHouse/pull/22960) ([Maksim Kita](https://github.com/kitaisreal)).
* Enabled `async_socket_for_remote` by default (using fewer OS threads for distributed queries). [#23683](https://github.com/ClickHouse/ClickHouse/pull/23683) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also a bug with over-compression of centroids in implementation of earlier version of the algorithm was fixed. [#23314](https://github.com/ClickHouse/ClickHouse/pull/23314) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Make function name `unhex` case insensitive for compatibility with MySQL. [#23229](https://github.com/ClickHouse/ClickHouse/pull/23229) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Implement functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for the generic case when the types of array elements are different. In previous versions the functions `arrayHasAny`, `arrayHasAll` returned false, and `has`, `indexOf`, `countEqual` threw an exception. Also added support for `Decimal` and big integer types in functions `has` and similar. This closes [#20272](https://github.com/ClickHouse/ClickHouse/issues/20272). [#23044](https://github.com/ClickHouse/ClickHouse/pull/23044) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Raised the threshold on max number of matches in result of the function `extractAllGroupsHorizontal`. [#23036](https://github.com/ClickHouse/ClickHouse/pull/23036) ([Vasily Nemkov](https://github.com/Enmk)).
* Do not perform `optimize_skip_unused_shards` for cluster with one node. [#22999](https://github.com/ClickHouse/ClickHouse/pull/22999) ([Azat Khuzhin](https://github.com/azat)).
* Added the ability to run clickhouse-keeper (an experimental drop-in replacement for ZooKeeper) with SSL. The config setting `keeper_server.tcp_port_secure` can be used for secure interaction between client and keeper-server, and `keeper_server.raft_configuration.secure` can be used to enable secure internal communication between nodes. [#22992](https://github.com/ClickHouse/ClickHouse/pull/22992) ([alesapin](https://github.com/alesapin)).
* Added the ability to flush the buffer only in the background for `Buffer` tables. [#22986](https://github.com/ClickHouse/ClickHouse/pull/22986) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a rare exception that was thrown when selecting from a MergeTree table with `NULL` in the `WHERE` condition. This closes [#20019](https://github.com/ClickHouse/ClickHouse/issues/20019). [#22978](https://github.com/ClickHouse/ClickHouse/pull/22978) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error handling in Poco HTTP Client for AWS. [#22973](https://github.com/ClickHouse/ClickHouse/pull/22973) ([kreuzerkrieg](https://github.com/kreuzerkrieg)).
* Respect `max_part_removal_threads` for `ReplicatedMergeTree`. [#22971](https://github.com/ClickHouse/ClickHouse/pull/22971) ([Azat Khuzhin](https://github.com/azat)).
* Fix an obscure corner case of MergeTree settings `inactive_parts_to_throw_insert = 0` with `inactive_parts_to_delay_insert > 0`. [#22947](https://github.com/ClickHouse/ClickHouse/pull/22947) ([Azat Khuzhin](https://github.com/azat)).
* `dateDiff` now works with `DateTime64` arguments (even for values outside of `DateTime` range) [#22931](https://github.com/ClickHouse/ClickHouse/pull/22931) ([Vasily Nemkov](https://github.com/Enmk)).
* MaterializeMySQL (experimental feature): added an ability to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian](https://github.com/cfroystad)).
* Allow RBAC row policy via the PostgreSQL protocol. Closes [#22658](https://github.com/ClickHouse/ClickHouse/issues/22658). The PostgreSQL protocol is enabled in the configuration by default. [#22755](https://github.com/ClickHouse/ClickHouse/pull/22755) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a metric to track how much time is spent waiting for the Buffer layer lock. [#22725](https://github.com/ClickHouse/ClickHouse/pull/22725) ([Azat Khuzhin](https://github.com/azat)).
* Allow to use CTE in VIEW definition. This closes [#22491](https://github.com/ClickHouse/ClickHouse/issues/22491). [#22657](https://github.com/ClickHouse/ClickHouse/pull/22657) ([Amos Bird](https://github.com/amosbird)).
* Clear the rest of the screen and show cursor in `clickhouse-client` if previous program has left garbage in terminal. This closes [#16518](https://github.com/ClickHouse/ClickHouse/issues/16518). [#22634](https://github.com/ClickHouse/ClickHouse/pull/22634) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (banker's rounding) is used. [#22582](https://github.com/ClickHouse/ClickHouse/pull/22582) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correctly check the structure of blocks of data that are sent by Distributed tables. [#22325](https://github.com/ClickHouse/ClickHouse/pull/22325) ([Azat Khuzhin](https://github.com/azat)).
* Allow publishing Kafka errors to a virtual column of Kafka engine, controlled by the `kafka_handle_error_mode` setting. [#21850](https://github.com/ClickHouse/ClickHouse/pull/21850) ([fastio](https://github.com/fastio)).
* Add aliases `simpleJSONExtract/simpleJSONHas` to `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes [#21383](https://github.com/ClickHouse/ClickHouse/issues/21383). [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
* Add `clickhouse-library-bridge` for library dictionary source. Closes [#9502](https://github.com/ClickHouse/ClickHouse/issues/9502). [#21509](https://github.com/ClickHouse/ClickHouse/pull/21509) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Forbid dropping a column if it's referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)).
* Support dynamic interserver credentials (rotating credentials without downtime). [#14113](https://github.com/ClickHouse/ClickHouse/pull/14113) ([johnskopis](https://github.com/johnskopis)).
* Add support for Kafka storage with `Arrow` and `ArrowStream` format messages. [#23415](https://github.com/ClickHouse/ClickHouse/pull/23415) ([Chao Ma](https://github.com/godliness)).
* Fixed missing semicolon in exception message. The user may find this exception message unpleasant to read. [#23208](https://github.com/ClickHouse/ClickHouse/pull/23208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed missing whitespace in some exception messages about `LowCardinality` type. [#23207](https://github.com/ClickHouse/ClickHouse/pull/23207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Some values were formatted with alignment in center in table cells in `Markdown` format. Not anymore. [#23096](https://github.com/ClickHouse/ClickHouse/pull/23096) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove non-essential details from suggestions in clickhouse-client. This closes [#22158](https://github.com/ClickHouse/ClickHouse/issues/22158). [#23040](https://github.com/ClickHouse/ClickHouse/pull/23040) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correct calculation of `bytes_allocated` field in system.dictionaries for sparse_hashed dictionaries. [#22867](https://github.com/ClickHouse/ClickHouse/pull/22867) ([Azat Khuzhin](https://github.com/azat)).
* Fixed approximate total rows accounting for reverse reading from MergeTree. [#22726](https://github.com/ClickHouse/ClickHouse/pull/22726) ([Azat Khuzhin](https://github.com/azat)).
* Fix the case when it was possible to configure a dictionary with a ClickHouse source that pointed to itself, leading to an infinite loop. Closes [#14314](https://github.com/ClickHouse/ClickHouse/issues/14314). [#22479](https://github.com/ClickHouse/ClickHouse/pull/22479) ([Maksim Kita](https://github.com/kitaisreal)).
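As referenced in the `non_replicated_deduplication_window` entry above, a sketch of how the new MergeTree setting might be applied. The table definition and the window size of `100` are illustrative assumptions, not recommended values:

```sql
-- Enable insert deduplication for a non-replicated MergeTree table.
CREATE TABLE events
(
    `ts` DateTime,
    `value` UInt64
)
ENGINE = MergeTree
ORDER BY ts
SETTINGS non_replicated_deduplication_window = 100;
```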
#### Bug Fix
* Multiple fixes for hedged requests. Fixed an error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` when the setting `use_hedged_requests` is enabled. Fixes [#23431](https://github.com/ClickHouse/ClickHouse/issues/23431). [#23805](https://github.com/ClickHouse/ClickHouse/pull/23805) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). Fixed a race condition in hedged connections which led to a crash. This fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)). Fixed a possible crash in case an `unknown packet` was received from a remote query (with `async_socket_for_remote` enabled). Fixes [#21167](https://github.com/ClickHouse/ClickHouse/issues/21167). [#23309](https://github.com/ClickHouse/ClickHouse/pull/23309) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the behavior when disabling the `input_format_with_names_use_header` setting discards all the input with the CSVWithNames format. This fixes [#22406](https://github.com/ClickHouse/ClickHouse/issues/22406). [#23202](https://github.com/ClickHouse/ClickHouse/pull/23202) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed remote JDBC bridge timeout connection issue. Closes [#9609](https://github.com/ClickHouse/ClickHouse/issues/9609). [#23771](https://github.com/ClickHouse/ClickHouse/pull/23771) ([Maksim Kita](https://github.com/kitaisreal), [alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the logic of initial load of `complex_key_hashed` if `update_field` is specified. Closes [#23800](https://github.com/ClickHouse/ClickHouse/issues/23800). [#23824](https://github.com/ClickHouse/ClickHouse/pull/23824) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed crash when `PREWHERE` and row policy filter are both in effect with empty result. [#23763](https://github.com/ClickHouse/ClickHouse/pull/23763) ([Amos Bird](https://github.com/amosbird)).
* Avoid possible "Cannot schedule a task" error (in case some exception had been occurred) on INSERT into Distributed. [#23744](https://github.com/ClickHouse/ClickHouse/pull/23744) ([Azat Khuzhin](https://github.com/azat)).
* Added an exception in case of completely the same values in both samples in aggregate function `mannWhitneyUTest`. This fixes [#23646](https://github.com/ClickHouse/ClickHouse/issues/23646). [#23654](https://github.com/ClickHouse/ClickHouse/pull/23654) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed server fault when inserting data through HTTP caused an exception. This fixes [#23512](https://github.com/ClickHouse/ClickHouse/issues/23512). [#23643](https://github.com/ClickHouse/ClickHouse/pull/23643) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed misinterpretation of some `LIKE` expressions with escape sequences. [#23610](https://github.com/ClickHouse/ClickHouse/pull/23610) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed restart / stop command hanging. Closes [#20214](https://github.com/ClickHouse/ClickHouse/issues/20214). [#23552](https://github.com/ClickHouse/ClickHouse/pull/23552) ([filimonov](https://github.com/filimonov)).
* Fixed `COLUMNS` matcher in case of multiple JOINs in select query. Closes [#22736](https://github.com/ClickHouse/ClickHouse/issues/22736). [#23501](https://github.com/ClickHouse/ClickHouse/pull/23501) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed a crash when modifying a column's default value when the column itself is used as a `ReplacingMergeTree` parameter. [#23483](https://github.com/ClickHouse/ClickHouse/pull/23483) ([hexiaoting](https://github.com/hexiaoting)).
* Fixed corner cases in vertical merges with `ReplacingMergeTree`. In rare cases they could lead to merge failures with exceptions like `Incomplete granules are not allowed while blocks are granules size`. [#23459](https://github.com/ClickHouse/ClickHouse/pull/23459) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a bug that did not allow casting from an empty array literal to an array with dimensions greater than 1, e.g. `CAST([] AS Array(Array(String)))`. Closes [#14476](https://github.com/ClickHouse/ClickHouse/issues/14476). [#23456](https://github.com/ClickHouse/ClickHouse/pull/23456) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed a bug when `deltaSum` aggregate function produced incorrect result after resetting the counter. [#23437](https://github.com/ClickHouse/ClickHouse/pull/23437) ([Russ Frank](https://github.com/rf)).
* Fixed `Cannot unlink file` error on unsuccessful creation of ReplicatedMergeTree table with multidisk configuration. This closes [#21755](https://github.com/ClickHouse/ClickHouse/issues/21755). [#23433](https://github.com/ClickHouse/ClickHouse/pull/23433) ([tavplubix](https://github.com/tavplubix)).
* Fixed incompatible constant expression generation during partition pruning based on virtual columns. This fixes https://github.com/ClickHouse/ClickHouse/pull/21401#discussion_r611888913. [#23366](https://github.com/ClickHouse/ClickHouse/pull/23366) ([Amos Bird](https://github.com/amosbird)).
* Fixed a crash when the setting `join_algorithm` is set to 'auto' and a Join is performed with a Dictionary. Closes [#23002](https://github.com/ClickHouse/ClickHouse/issues/23002). [#23312](https://github.com/ClickHouse/ClickHouse/pull/23312) ([Vladimir](https://github.com/vdimir)).
* Don't relax NOT conditions during partition pruning. This fixes [#23305](https://github.com/ClickHouse/ClickHouse/issues/23305) and [#21539](https://github.com/ClickHouse/ClickHouse/issues/21539). [#23310](https://github.com/ClickHouse/ClickHouse/pull/23310) ([Amos Bird](https://github.com/amosbird)).
* Fixed very rare race condition on background cleanup of old blocks. It might cause a block not to be deduplicated if it's too close to the end of deduplication window. [#23301](https://github.com/ClickHouse/ClickHouse/pull/23301) ([tavplubix](https://github.com/tavplubix)).
* Fixed very rare (distributed) race condition between creation and removal of ReplicatedMergeTree tables. It might cause exceptions like `node doesn't exist` on attempt to create replicated table. Fixes [#21419](https://github.com/ClickHouse/ClickHouse/issues/21419). [#23294](https://github.com/ClickHouse/ClickHouse/pull/23294) ([tavplubix](https://github.com/tavplubix)).
* Fixed DDL creation of simple-key dictionaries when the primary key is not the first attribute. Fixes [#23236](https://github.com/ClickHouse/ClickHouse/issues/23236). [#23262](https://github.com/ClickHouse/ClickHouse/pull/23262) ([Maksim Kita](https://github.com/kitaisreal)).
* Fixed reading from ODBC when there are many long column names in a table. Closes [#8853](https://github.com/ClickHouse/ClickHouse/issues/8853). [#23215](https://github.com/ClickHouse/ClickHouse/pull/23215) ([Kseniia Sumarokova](https://github.com/kssenii)).
* MaterializeMySQL (experimental feature): fixed `Not found column` error when selecting from `MaterializeMySQL` with condition on key column. Fixes [#22432](https://github.com/ClickHouse/ClickHouse/issues/22432). [#23200](https://github.com/ClickHouse/ClickHouse/pull/23200) ([tavplubix](https://github.com/tavplubix)).
* Correct aliases handling if subquery was optimized to constant. Fixes [#22924](https://github.com/ClickHouse/ClickHouse/issues/22924). Fixes [#10401](https://github.com/ClickHouse/ClickHouse/issues/10401). [#23191](https://github.com/ClickHouse/ClickHouse/pull/23191) ([Maksim Kita](https://github.com/kitaisreal)).
* The server might fail to start if the `data_type_default_nullable` setting is enabled in the default profile; it's fixed. Fixes [#22573](https://github.com/ClickHouse/ClickHouse/issues/22573). [#23185](https://github.com/ClickHouse/ClickHouse/pull/23185) ([tavplubix](https://github.com/tavplubix)).
* Fixed a crash on shutdown which happened because of wrong accounting of current connections. [#23154](https://github.com/ClickHouse/ClickHouse/pull/23154) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed `Table .inner_id... doesn't exist` error when selecting from Materialized View after detaching it from Atomic database and attaching back. [#23047](https://github.com/ClickHouse/ClickHouse/pull/23047) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Cannot find column in ActionsDAG result` which may happen if subquery uses `untuple`. Fixes [#22290](https://github.com/ClickHouse/ClickHouse/issues/22290). [#22991](https://github.com/ClickHouse/ClickHouse/pull/22991) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix usage of constant columns of type `Map` with nullable values. [#22939](https://github.com/ClickHouse/ClickHouse/pull/22939) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `formatDateTime()` on `DateTime64` and the `%C` format specifier; fixed `toDateTime64()` for large values and non-zero scale. [#22937](https://github.com/ClickHouse/ClickHouse/pull/22937) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed a crash when using `mannWhitneyUTest` and `rankCorr` with window functions. This fixes [#22728](https://github.com/ClickHouse/ClickHouse/issues/22728). [#22876](https://github.com/ClickHouse/ClickHouse/pull/22876) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* LIVE VIEW (experimental feature): fixed possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in `TemporaryLiveViewCleaner`, [see](https://gist.github.com/vzakaznikov/0c03195960fc86b56bfe2bc73a90019e). [#22858](https://github.com/ClickHouse/ClickHouse/pull/22858) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed pushdown of `HAVING` in case, when filter column is used in aggregation. [#22763](https://github.com/ClickHouse/ClickHouse/pull/22763) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed possible hangs in Zookeeper requests in case of OOM exception. Fixes [#22438](https://github.com/ClickHouse/ClickHouse/issues/22438). [#22684](https://github.com/ClickHouse/ClickHouse/pull/22684) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed waiting for mutations on several replicas for ReplicatedMergeTree table engines. Previously, a mutation/ALTER query could finish before the mutation was actually executed on other replicas. [#22669](https://github.com/ClickHouse/ClickHouse/pull/22669) ([alesapin](https://github.com/alesapin)).
* Fixed an exception for `Log` engine with nested types without columns in the SELECT clause. [#22654](https://github.com/ClickHouse/ClickHouse/pull/22654) ([Azat Khuzhin](https://github.com/azat)).
* Fix unlimited wait for auxiliary AWS requests. [#22594](https://github.com/ClickHouse/ClickHouse/pull/22594) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fixed a crash when client closes connection very early [#22579](https://github.com/ClickHouse/ClickHouse/issues/22579). [#22591](https://github.com/ClickHouse/ClickHouse/pull/22591) ([nvartolomei](https://github.com/nvartolomei)).
* `Map` data type (experimental feature): fixed an incorrect formatting of function `map` in distributed queries. [#22588](https://github.com/ClickHouse/ClickHouse/pull/22588) ([foolchi](https://github.com/foolchi)).
* Fixed deserialization of empty string without newline at end of TSV format. This closes [#20244](https://github.com/ClickHouse/ClickHouse/issues/20244). Possible workaround without version update: set `input_format_null_as_default` to zero. It was zero in old versions. [#22527](https://github.com/ClickHouse/ClickHouse/pull/22527) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed wrong cast of a column of `LowCardinality` type in Merge Join algorithm. Close [#22386](https://github.com/ClickHouse/ClickHouse/issues/22386), close [#22388](https://github.com/ClickHouse/ClickHouse/issues/22388). [#22510](https://github.com/ClickHouse/ClickHouse/pull/22510) ([Vladimir](https://github.com/vdimir)).
* Buffer overflow (on read) was possible in `tokenbf_v1` full text index. The excessive bytes are not used but the read operation may lead to crash in rare cases. This closes [#19233](https://github.com/ClickHouse/ClickHouse/issues/19233). [#22421](https://github.com/ClickHouse/ClickHouse/pull/22421) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Do not limit HTTP chunk size. Fixes [#21907](https://github.com/ClickHouse/ClickHouse/issues/21907). [#22322](https://github.com/ClickHouse/ClickHouse/pull/22322) ([Ivan](https://github.com/abyss7)).
* Fixed a bug which led to under-aggregation of data in case of enabled `optimize_aggregation_in_order` and many parts in a table. Slightly improved performance of aggregation with `optimize_aggregation_in_order` enabled. [#21889](https://github.com/ClickHouse/ClickHouse/pull/21889) ([Anton Popov](https://github.com/CurtizJ)).
* Check if table function view is used as a column. This complements #20350. [#21465](https://github.com/ClickHouse/ClickHouse/pull/21465) ([Amos Bird](https://github.com/amosbird)).
* Fix "unknown column" error for tables with `Merge` engine in queris with `JOIN` and aggregation. Closes [#18368](https://github.com/ClickHouse/ClickHouse/issues/18368), close [#22226](https://github.com/ClickHouse/ClickHouse/issues/22226). [#21370](https://github.com/ClickHouse/ClickHouse/pull/21370) ([Vladimir](https://github.com/vdimir)).
* Fixed name clashes in pushdown optimization. It caused incorrect `WHERE` filtration after FULL JOIN. Close [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
* Fixed very rare bug when quorum insert with `quorum_parallel=1` is not really "quorum" because of deduplication. [#18215](https://github.com/ClickHouse/ClickHouse/pull/18215) ([filimonov](https://github.com/filimonov) - reported, [alesapin](https://github.com/alesapin) - fixed).
#### Build/Testing/Packaging Improvement
* Run stateless tests in parallel in CI. [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
* Simplify debian packages. This fixes [#21698](https://github.com/ClickHouse/ClickHouse/issues/21698). [#22976](https://github.com/ClickHouse/ClickHouse/pull/22976) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added support for ClickHouse build on Apple M1. [#21639](https://github.com/ClickHouse/ClickHouse/pull/21639) ([changvvb](https://github.com/changvvb)).
* Fixed ClickHouse Keeper build for MacOS. [#22860](https://github.com/ClickHouse/ClickHouse/pull/22860) ([alesapin](https://github.com/alesapin)).
* Fixed some tests on AArch64 platform. [#22596](https://github.com/ClickHouse/ClickHouse/pull/22596) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added function alignment for possibly better performance. [#21431](https://github.com/ClickHouse/ClickHouse/pull/21431) ([Danila Kutenin](https://github.com/danlark1)).
* Adjust some tests to output identical results on amd64 and aarch64 (qemu). The result depended on implementation-specific CPU behaviour. [#22590](https://github.com/ClickHouse/ClickHouse/pull/22590) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow query profiling only on x86_64. See [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174#issuecomment-812954965) and [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638#issuecomment-703805337). This closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#22580](https://github.com/ClickHouse/ClickHouse/pull/22580) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow building with unbundled xz (lzma) using `USE_INTERNAL_XZ_LIBRARY=OFF` CMake option. [#22571](https://github.com/ClickHouse/ClickHouse/pull/22571) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Enable bundled `openldap` on `ppc64le` [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Disable incompatible libraries (platform specific typically) on `ppc64le` [#22475](https://github.com/ClickHouse/ClickHouse/pull/22475) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Add Jepsen test in CI for clickhouse Keeper. [#22373](https://github.com/ClickHouse/ClickHouse/pull/22373) ([alesapin](https://github.com/alesapin)).
* Build `jemalloc` with support for [heap profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Heap-Profiling). [#22834](https://github.com/ClickHouse/ClickHouse/pull/22834) ([nvartolomei](https://github.com/nvartolomei)).
* Avoid UB in `*Log` engines for rwlock unlock due to unlock from another thread. [#22583](https://github.com/ClickHouse/ClickHouse/pull/22583) ([Azat Khuzhin](https://github.com/azat)).
* Fixed UB by unlocking the rwlock of the TinyLog from the same thread. [#22560](https://github.com/ClickHouse/ClickHouse/pull/22560) ([Azat Khuzhin](https://github.com/azat)).
## ClickHouse release 21.4
### ClickHouse release 21.4.1 2021-04-12


@ -593,6 +593,9 @@ include_directories(${ConfigIncludePath})
# Add as many warnings as possible for our own code.
include (cmake/warnings.cmake)
# Check if needed compiler flags are supported
include (cmake/check_flags.cmake)
add_subdirectory (base)
add_subdirectory (src)
add_subdirectory (programs)


@ -8,7 +8,7 @@ ClickHouse® is an open-source column-oriented database management system that a
* [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster.
* [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information.
* [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
* [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
* [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation.
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.


@ -468,7 +468,7 @@ void BaseDaemon::reloadConfiguration()
* instead of using files specified in config.xml.
* (It's convenient to log in console when you start server without any command line parameters.)
*/
config_path = config().getString("config-file", "config.xml");
config_path = config().getString("config-file", getDefaultConfigFileName());
DB::ConfigProcessor config_processor(config_path, false, true);
config_processor.setConfigPath(Poco::Path(config_path).makeParent().toString());
loaded_config = config_processor.loadConfig(/* allow_zk_includes = */ true);
@ -516,6 +516,11 @@ std::string BaseDaemon::getDefaultCorePath() const
return "/opt/cores/";
}
std::string BaseDaemon::getDefaultConfigFileName() const
{
return "config.xml";
}
void BaseDaemon::closeFDs()
{
#if defined(OS_FREEBSD) || defined(OS_DARWIN)


@ -149,6 +149,8 @@ protected:
virtual std::string getDefaultCorePath() const;
virtual std::string getDefaultConfigFileName() const;
std::optional<DB::StatusFile> pid_file;
std::atomic_bool is_cancelled{false};


@ -51,12 +51,22 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
/// Use extended interface of Channel for more comprehensive logging.
split = new DB::OwnSplitChannel();
auto log_level = config.getString("logger.level", "trace");
auto log_level_string = config.getString("logger.level", "trace");
/// different channels (log, console, syslog) may have different loglevels configured
/// The maximum (the most verbose) of those will be used as default for Poco loggers
int max_log_level = 0;
const auto log_path = config.getString("logger.log", "");
if (!log_path.empty())
{
createDirectory(log_path);
std::cerr << "Logging " << log_level << " to " << log_path << std::endl;
std::cerr << "Logging " << log_level_string << " to " << log_path << std::endl;
auto log_level = Poco::Logger::parseLevel(log_level_string);
if (log_level > max_log_level)
{
max_log_level = log_level;
}
// Set up two channel chains.
log_file = new Poco::FileChannel;
@ -72,6 +82,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter;
Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, log_file);
log->setLevel(log_level);
split->addChannel(log);
}
@ -79,6 +90,15 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
if (!errorlog_path.empty())
{
createDirectory(errorlog_path);
// NOTE: we don't use notice & critical in the code, so in practice error log collects fatal & error & warning.
// (!) Warnings are important, they require attention and should never be silenced / ignored.
auto errorlog_level = Poco::Logger::parseLevel(config.getString("logger.errorlog_level", "notice"));
if (errorlog_level > max_log_level)
{
max_log_level = errorlog_level;
}
std::cerr << "Logging errors to " << errorlog_path << std::endl;
error_log_file = new Poco::FileChannel;
@ -93,7 +113,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter;
Poco::AutoPtr<DB::OwnFormattingChannel> errorlog = new DB::OwnFormattingChannel(pf, error_log_file);
errorlog->setLevel(Poco::Message::PRIO_NOTICE);
errorlog->setLevel(errorlog_level);
errorlog->open();
split->addChannel(errorlog);
}
@ -101,6 +121,11 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
if (config.getBool("logger.use_syslog", false))
{
//const std::string & cmd_name = commandName();
auto syslog_level = Poco::Logger::parseLevel(config.getString("logger.syslog_level", log_level_string));
if (syslog_level > max_log_level)
{
max_log_level = syslog_level;
}
if (config.has("logger.syslog.address"))
{
@ -127,6 +152,8 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter;
Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, syslog_channel);
log->setLevel(syslog_level);
split->addChannel(log);
}
@ -138,9 +165,17 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
{
bool color_enabled = config.getBool("logger.color_terminal", color_logs_by_default);
auto console_log_level_string = config.getString("logger.console_log_level", log_level_string);
auto console_log_level = Poco::Logger::parseLevel(console_log_level_string);
if (console_log_level > max_log_level)
{
max_log_level = console_log_level;
}
Poco::AutoPtr<OwnPatternFormatter> pf = new OwnPatternFormatter(color_enabled);
Poco::AutoPtr<DB::OwnFormattingChannel> log = new DB::OwnFormattingChannel(pf, new Poco::ConsoleChannel);
logger.warning("Logging " + log_level + " to console");
logger.warning("Logging " + console_log_level_string + " to console");
log->setLevel(console_log_level);
split->addChannel(log);
}
@ -149,17 +184,17 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
logger.setChannel(split);
// Global logging level (it can be overridden for specific loggers).
logger.setLevel(log_level);
logger.setLevel(max_log_level);
// Set level to all already created loggers
std::vector<std::string> names;
//logger_root = Logger::root();
logger.root().names(names);
for (const auto & name : names)
logger.root().get(name).setLevel(log_level);
logger.root().get(name).setLevel(max_log_level);
// Attach to the root logger.
logger.root().setLevel(log_level);
logger.root().setLevel(max_log_level);
logger.root().setChannel(logger.getChannel());
// Explicitly specified log levels for specific loggers.


@ -22,6 +22,9 @@ public:
void setLevel(Poco::Message::Priority priority_) { priority = priority_; }
// Poco::Logger::parseLevel returns ints
void setLevel(int level) { priority = static_cast<Poco::Message::Priority>(level); }
void open() override
{
if (pChannel)


@ -78,6 +78,8 @@ PoolWithFailover::PoolWithFailover(
const RemoteDescription & addresses,
const std::string & user,
const std::string & password,
unsigned default_connections_,
unsigned max_connections_,
size_t max_tries_)
: max_tries(max_tries_)
, shareable(false)
@ -85,7 +87,13 @@ PoolWithFailover::PoolWithFailover(
/// Replicas have the same priority, but traversed replicas are moved to the end of the queue.
for (const auto & [host, port] : addresses)
{
replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port));
replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database,
host, user, password, port,
/* socket_ = */ "",
MYSQLXX_DEFAULT_TIMEOUT,
MYSQLXX_DEFAULT_RW_TIMEOUT,
default_connections_,
max_connections_));
}
}


@ -115,6 +115,8 @@ namespace mysqlxx
const RemoteDescription & addresses,
const std::string & user,
const std::string & password,
unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);
PoolWithFailover(const PoolWithFailover & other);


@ -1,9 +1,9 @@
# This strings autochanged from release_lib.sh:
SET(VERSION_REVISION 54451)
SET(VERSION_REVISION 54452)
SET(VERSION_MAJOR 21)
SET(VERSION_MINOR 6)
SET(VERSION_MINOR 7)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96)
SET(VERSION_DESCRIBE v21.6.1.1-prestable)
SET(VERSION_STRING 21.6.1.1)
SET(VERSION_GITHASH 976ccc2e908ac3bc28f763bfea8134ea0a121b40)
SET(VERSION_DESCRIBE v21.7.1.1-prestable)
SET(VERSION_STRING 21.7.1.1)
# end of autochange

6 cmake/check_flags.cmake Normal file

@ -0,0 +1,6 @@
include (CheckCXXCompilerFlag)
include (CheckCCompilerFlag)
check_cxx_compiler_flag("-Wsuggest-destructor-override" HAS_SUGGEST_DESTRUCTOR_OVERRIDE)
check_cxx_compiler_flag("-Wshadow" HAS_SHADOW)
check_cxx_compiler_flag("-Wsuggest-override" HAS_SUGGEST_OVERRIDE)

2 contrib/NuRaft vendored

@ -1 +1 @@
Subproject commit 377f8e77491d9f66ce8e32e88aae19dffe8dc4d7
Subproject commit 95d6bbba579b3a4e4c2dede954f541ff6f3dba51

2 contrib/boringssl vendored

@ -1 +1 @@
Subproject commit 83c1cda8a0224dc817cbad2966c7ed4acc35f02a
Subproject commit a6a2e2ab3e44d97ce98e51c558e989f211de7eb3

2 contrib/cppkafka vendored

@ -1 +1 @@
Subproject commit b06e64ef5bffd636d918a742c689f69130c1dbab
Subproject commit 57a599d99c540e647bcd0eb9ea77c523cca011b3

2 contrib/grpc vendored

@ -1 +1 @@
Subproject commit 1085a941238e66b13e3fb89c310533745380acbc
Subproject commit 60c986e15cae70aade721d26badabab1f822fdd6

2 contrib/poco vendored

@ -1 +1 @@
Subproject commit b7d9ec16ee33ca76643d5fcd907ea9a33285640a
Subproject commit 5994506908028612869fee627d68d8212dfe7c1e

2 contrib/re2 vendored

@ -1 +1 @@
Subproject commit 7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0
Subproject commit 13ebb377c6ad763ca61d12dd6f88b1126bd0b911


@ -1,7 +1,7 @@
file (READ ${SOURCE_FILENAME} CONTENT)
string (REGEX REPLACE "using re2::RE2;" "" CONTENT "${CONTENT}")
string (REGEX REPLACE "using re2::LazyRE2;" "" CONTENT "${CONTENT}")
string (REGEX REPLACE "namespace re2" "namespace re2_st" CONTENT "${CONTENT}")
string (REGEX REPLACE "namespace re2 {" "namespace re2_st {" CONTENT "${CONTENT}")
string (REGEX REPLACE "re2::" "re2_st::" CONTENT "${CONTENT}")
string (REGEX REPLACE "\"re2/" "\"re2_st/" CONTENT "${CONTENT}")
string (REGEX REPLACE "(.\\*?_H)" "\\1_ST" CONTENT "${CONTENT}")

2 contrib/rocksdb vendored

@ -1 +1 @@
Subproject commit 54a0decabbcf4c0bb5cf7befa9c597f28289bff5
Subproject commit 07c77549a20b63ff6981b400085eba36bb5c80c4

2 contrib/simdjson vendored

@ -1 +1 @@
Subproject commit 95b4870e20be5f97d9dcf63b23b1c6f520c366c1
Subproject commit 8df32cea3359cb30120795da6020b3b73da01d38

2 contrib/zlib-ng vendored

@ -1 +1 @@
Subproject commit 5cc4d232020dc66d1d6c5438834457e2a2f6127b
Subproject commit db232d30b4c72fd58e6d7eae2d12cebf9c3d90db

2 contrib/zstd vendored

@ -1 +1 @@
Subproject commit 10f0e6993f9d2f682da6d04aa2385b7d53cbb4ee
Subproject commit a488ba114ec17ea1054b9057c26a046fc122b3b6


@ -66,6 +66,7 @@ SET(Sources
"${LIBRARY_DIR}/compress/zstd_compress.c"
"${LIBRARY_DIR}/compress/zstd_compress_literals.c"
"${LIBRARY_DIR}/compress/zstd_compress_sequences.c"
"${LIBRARY_DIR}/compress/zstd_compress_superblock.c"
"${LIBRARY_DIR}/compress/zstd_double_fast.c"
"${LIBRARY_DIR}/compress/zstd_fast.c"
"${LIBRARY_DIR}/compress/zstd_lazy.c"
@ -95,16 +96,19 @@ SET(Headers
"${LIBRARY_DIR}/common/pool.h"
"${LIBRARY_DIR}/common/threading.h"
"${LIBRARY_DIR}/common/xxhash.h"
"${LIBRARY_DIR}/common/zstd_errors.h"
"${LIBRARY_DIR}/common/zstd_deps.h"
"${LIBRARY_DIR}/common/zstd_internal.h"
"${LIBRARY_DIR}/common/zstd_trace.h"
"${LIBRARY_DIR}/compress/hist.h"
"${LIBRARY_DIR}/compress/zstd_compress_internal.h"
"${LIBRARY_DIR}/compress/zstd_compress_literals.h"
"${LIBRARY_DIR}/compress/zstd_compress_sequences.h"
"${LIBRARY_DIR}/compress/zstd_compress_superblock.h"
"${LIBRARY_DIR}/compress/zstd_cwksp.h"
"${LIBRARY_DIR}/compress/zstd_double_fast.h"
"${LIBRARY_DIR}/compress/zstd_fast.h"
"${LIBRARY_DIR}/compress/zstd_lazy.h"
"${LIBRARY_DIR}/compress/zstd_ldm_geartab.h"
"${LIBRARY_DIR}/compress/zstd_ldm.h"
"${LIBRARY_DIR}/compress/zstdmt_compress.h"
"${LIBRARY_DIR}/compress/zstd_opt.h"
@ -113,7 +117,8 @@ SET(Headers
"${LIBRARY_DIR}/decompress/zstd_decompress_internal.h"
"${LIBRARY_DIR}/dictBuilder/cover.h"
"${LIBRARY_DIR}/dictBuilder/divsufsort.h"
"${LIBRARY_DIR}/dictBuilder/zdict.h"
"${LIBRARY_DIR}/zdict.h"
"${LIBRARY_DIR}/zstd_errors.h"
"${LIBRARY_DIR}/zstd.h")
SET(ZSTD_LEGACY_SUPPORT true)

4 debian/changelog vendored

@ -1,5 +1,5 @@
clickhouse (21.6.1.1) unstable; urgency=low
clickhouse (21.7.1.1) unstable; urgency=low
* Modified source code
-- clickhouse-release <clickhouse-release@yandex-team.ru> Tue, 20 Apr 2021 01:48:16 +0300
-- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 20 May 2021 22:23:29 +0300


@ -1,7 +1,7 @@
FROM ubuntu:18.04
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.6.1.*
ARG version=21.7.1.*
RUN apt-get update \
&& apt-get install --yes --no-install-recommends \


@ -1,7 +1,7 @@
FROM ubuntu:20.04
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.6.1.*
ARG version=21.7.1.*
ARG gosu_ver=1.10
# set non-empty deb_location_url url to create a docker image


@ -1,7 +1,7 @@
FROM ubuntu:18.04
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.6.1.*
ARG version=21.7.1.*
RUN apt-get update && \
apt-get install -y apt-transport-https dirmngr && \


@ -73,7 +73,7 @@ function start_server
--path "$FASTTEST_DATA"
--user_files_path "$FASTTEST_DATA/user_files"
--top_level_domains_path "$FASTTEST_DATA/top_level_domains"
--keeper_server.log_storage_path "$FASTTEST_DATA/coordination"
--keeper_server.storage_path "$FASTTEST_DATA/coordination"
)
clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" &
server_pid=$!
@ -376,35 +376,14 @@ function run_tests
# Depends on LLVM JIT
01852_jit_if
01865_jit_comparison_constant_result
01871_merge_tree_compile_expressions
)
(time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
# substr is to remove semicolon after test name
readarray -t FAILED_TESTS < <(awk '/\[ FAIL|TIMEOUT|ERROR \]/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt")
# We will rerun sequentially any tests that have failed during parallel run.
# They might have failed because there was some interference from other tests
# running concurrently. If they fail even in seqential mode, we will report them.
# FIXME All tests that require exclusive access to the server must be
# explicitly marked as `sequential`, and `clickhouse-test` must detect them and
# run them in a separate group after all other tests. This is faster and also
# explicit instead of guessing.
if [[ -n "${FAILED_TESTS[*]}" ]]
then
stop_server ||:
# Clean the data so that there is no interference from the previous test run.
rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files,coordination} ||:
start_server
echo "Going to run again: ${FAILED_TESTS[*]}"
clickhouse-test --hung-check --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt"
else
echo "No failed tests"
fi
time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
--no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" \
-- "$FASTTEST_FOCUS" 2>&1 \
| ts '%Y-%m-%d %H:%M:%S' \
| tee "$FASTTEST_OUTPUT/test_log.txt"
}
case "$stage" in


@ -0,0 +1,92 @@
version: '2.3'
services:
zoo1:
image: ${image:-yandex/clickhouse-integration-test}
restart: always
user: ${user:-}
volumes:
- type: bind
source: ${keeper_binary:-}
target: /usr/bin/clickhouse
- type: bind
source: ${keeper_config_dir1:-}
target: /etc/clickhouse-keeper
- type: bind
source: ${keeper_logs_dir1:-}
target: /var/log/clickhouse-keeper
- type: ${keeper_fs:-tmpfs}
source: ${keeper_db_dir1:-}
target: /var/lib/clickhouse-keeper
entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
cap_add:
- SYS_PTRACE
- NET_ADMIN
- IPC_LOCK
- SYS_NICE
security_opt:
- label:disable
dns_opt:
- attempts:2
- timeout:1
- inet6
- rotate
zoo2:
image: ${image:-yandex/clickhouse-integration-test}
restart: always
user: ${user:-}
volumes:
- type: bind
source: ${keeper_binary:-}
target: /usr/bin/clickhouse
- type: bind
source: ${keeper_config_dir2:-}
target: /etc/clickhouse-keeper
- type: bind
source: ${keeper_logs_dir2:-}
target: /var/log/clickhouse-keeper
- type: ${keeper_fs:-tmpfs}
source: ${keeper_db_dir2:-}
target: /var/lib/clickhouse-keeper
entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
cap_add:
- SYS_PTRACE
- NET_ADMIN
- IPC_LOCK
- SYS_NICE
security_opt:
- label:disable
dns_opt:
- attempts:2
- timeout:1
- inet6
- rotate
zoo3:
image: ${image:-yandex/clickhouse-integration-test}
restart: always
user: ${user:-}
volumes:
- type: bind
source: ${keeper_binary:-}
target: /usr/bin/clickhouse
- type: bind
source: ${keeper_config_dir3:-}
target: /etc/clickhouse-keeper
- type: bind
source: ${keeper_logs_dir3:-}
target: /var/log/clickhouse-keeper
- type: ${keeper_fs:-tmpfs}
source: ${keeper_db_dir3:-}
target: /var/lib/clickhouse-keeper
entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log"
cap_add:
- SYS_PTRACE
- NET_ADMIN
- IPC_LOCK
- SYS_NICE
security_opt:
- label:disable
dns_opt:
- attempts:2
- timeout:1
- inet6
- rotate


@ -20,6 +20,9 @@
<!-- mmap shows some improvements in perf tests -->
<min_bytes_to_use_mmap_io>64Mi</min_bytes_to_use_mmap_io>
<!-- disable jit for perf tests -->
<compile_expressions>0</compile_expressions>
</default>
</profiles>
<users>


@ -35,10 +35,10 @@ RUN apt-get update \
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN pip3 install urllib3 testflows==1.6.74 docker-compose docker dicttoxml kazoo tzlocal python-dateutil numpy
RUN pip3 install urllib3 testflows==1.6.90 docker-compose==1.29.1 docker==5.0.0 dicttoxml kazoo tzlocal python-dateutil numpy
ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 17.09.1-ce
ENV DOCKER_VERSION 20.10.6
RUN set -eux; \
\


@ -1,6 +1,6 @@
# How to add test queries to ClickHouse CI
ClickHouse has hundreds (or even thousands) of features. Every commit get checked by a complex set of tests containing many thousands of test cases.
ClickHouse has hundreds (or even thousands) of features. Every commit gets checked by a complex set of tests containing many thousands of test cases.
The core functionality is very well tested, but some corner-cases and different combinations of features can be uncovered with ClickHouse CI.
@ -105,13 +105,13 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
5) ensure everything is correct, if the test output is incorrect (due to some bug for example), adjust the reference file using text editor.
#### How create good test
#### How to create good test
- test should be
- minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
- fast - should not take longer than few seconds (better subseconds)
- correct - fails then feature is not working
- deteministic
- deterministic
- isolated / stateless
- don't rely on some environment things
- don't rely on timing when possible
@ -124,7 +124,7 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
- clean up the created objects after test and before the test (DROP IF EXISTS) - in case of some dirty state
- prefer sync mode of operations (mutations, merges, etc.)
- use other SQL files in the `0_stateless` folder as an example
- ensure the feature / feature combination you want to tests is not covered yet with existsing tests
- ensure the feature / feature combination you want to tests is not covered yet with existing tests
#### Commit / push / create PR.
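To make the test guidelines above concrete, here is a minimal sketch of what a stateless test might contain (the file name `01521_dummy_test.sql` comes from the earlier instructions; the table and values are illustrative):

```sql
-- tests/queries/0_stateless/01521_dummy_test.sql (illustrative sketch)
DROP TABLE IF EXISTS test_dummy;                -- clean up possible dirty state before the test
CREATE TABLE test_dummy (x UInt64) ENGINE = Memory;
INSERT INTO test_dummy VALUES (1), (2), (3);
SELECT sum(x) FROM test_dummy;                  -- deterministic output, captured in the .reference file
DROP TABLE test_dummy;                          -- clean up after the test
```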


@ -15,7 +15,12 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
...
) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause'])
SETTINGS
[connection_pool_size=16, ]
[connection_max_tries=3, ]
[connection_auto_close=true ]
;
```
See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
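A sketch of how the new connection settings from the syntax above might be used; the host, database, table, credentials and setting values are placeholders:

```sql
CREATE TABLE mysql_orders
(
    `id` UInt64,
    `amount` Float64
)
ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'reader', 'secret')
SETTINGS
    connection_pool_size = 16,
    connection_max_tries = 3,
    connection_auto_close = true;
```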


@ -139,8 +139,8 @@ The following settings can be specified in configuration file for given endpoint
- `endpoint` — Specifies prefix of an endpoint. Mandatory.
- `access_key_id` and `secret_access_key` — Specifies credentials to use with given endpoint. Optional.
- `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and [Amazon EC2](https://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud) metadata for given endpoint. Optional, default value is `false`.
- `region` — Specifies S3 region name. Optional.
- `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for given endpoint. Optional, default value is `false`.
- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
- `header` — Adds specified HTTP header to a request to given endpoint. Optional, can be specified multiple times.
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.


@ -104,7 +104,7 @@ For non-Linux operating systems and for AArch64 CPU arhitecture, ClickHouse buil
After downloading, you can use the `clickhouse client` to connect to the server, or `clickhouse local` to process local data.
Run `sudo ./clickhouse install` if you want to install clickhouse system-wide (also with needed condiguration files, configuring users etc.). After that run `clickhouse start` commands to start the clickhouse-server and `clickhouse-client` to connect to it.
Run `sudo ./clickhouse install` if you want to install clickhouse system-wide (also with needed configuration files, configuring users etc.). After that run `clickhouse start` commands to start the clickhouse-server and `clickhouse-client` to connect to it.
These builds are not recommended for use in production environments because they are less thoroughly tested, but you can do so on your own risk. They also have only a subset of ClickHouse features available.

View File

@ -17,6 +17,7 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
<yandex>
<!-- ... -->
<ldap_servers>
<!-- Typical LDAP server. -->
<my_ldap_server>
<host>localhost</host>
<port>636</port>
@ -31,6 +32,18 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
<tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
<tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
</my_ldap_server>
<!-- Typical Active Directory with configured user DN detection for further role mapping. -->
<my_ad_server>
<host>localhost</host>
<port>389</port>
<bind_dn>EXAMPLE\{user_name}</bind_dn>
<user_dn_detection>
<base_dn>CN=Users,DC=example,DC=com</base_dn>
<search_filter>(&amp;(objectClass=user)(sAMAccountName={user_name}))</search_filter>
</user_dn_detection>
<enable_tls>no</enable_tls>
</my_ad_server>
</ldap_servers>
</yandex>
```
@ -43,6 +56,15 @@ Note, that you can define multiple LDAP servers inside the `ldap_servers` sectio
- `port` — LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise.
- `bind_dn` — Template used to construct the DN to bind to.
- The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
- `user_dn_detection` - Section with LDAP search parameters for detecting the actual user DN of the bound user.
- This is mainly used in search filters for further role mapping when the server is Active Directory. The resulting user DN will be used when replacing `{user_dn}` substrings wherever they are allowed. By default, the user DN is set equal to the bind DN, but once the search is performed, it will be updated to the actual detected user DN value.
- `base_dn` - Template used to construct the base DN for the LDAP search.
- The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during the LDAP search.
- `scope` - Scope of the LDAP search.
- Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
- `search_filter` - Template used to construct the search filter for the LDAP search.
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, and base DN during the LDAP search.
- Note that the special characters must be escaped properly in XML.
- `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server.
- Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
- `enable_tls` — A flag to trigger the use of the secure connection to the LDAP server.
@ -107,7 +129,7 @@ Goes into `config.xml`.
<yandex>
<!-- ... -->
<user_directories>
<!-- ... -->
<!-- Typical LDAP server. -->
<ldap>
<server>my_ldap_server</server>
<roles>
@ -122,6 +144,18 @@ Goes into `config.xml`.
<prefix>clickhouse_</prefix>
</role_mapping>
</ldap>
<!-- Typical Active Directory with role mapping that relies on the detected user DN. -->
<ldap>
<server>my_ad_server</server>
<role_mapping>
<base_dn>CN=Users,DC=example,DC=com</base_dn>
<attribute>CN</attribute>
<scope>subtree</scope>
<search_filter>(&amp;(objectClass=group)(member={user_dn}))</search_filter>
<prefix>clickhouse_</prefix>
</role_mapping>
</ldap>
</user_directories>
</yandex>
```
@ -137,13 +171,13 @@ Note that `my_ldap_server` referred in the `ldap` section inside the `user_direc
- When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
- There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
- `base_dn` — Template used to construct the base DN for the LDAP search.
- The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during each LDAP search.
- The resulting DN will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{user_dn}` substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
- `scope` — Scope of the LDAP search.
- Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
- `search_filter` — Template used to construct the search filter for the LDAP search.
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}` and `{base_dn}` substrings of the template with the actual user name, bind DN and base DN during each LDAP search.
- The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, `{user_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, user DN, and base DN during each LDAP search.
- Note that the special characters must be escaped properly in XML.
- `attribute` — Attribute name whose values will be returned by the LDAP search.
- `attribute` — Attribute name whose values will be returned by the LDAP search. Default value: `cn`.
- `prefix` — Prefix that is expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default. See the example below.
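For instance, assuming the LDAP search returns a group whose `CN` is `clickhouse_etl` and the configured prefix is `clickhouse_`, the mapped local role is `etl`, which must already exist. A sketch with hypothetical role name and grants:

```sql
-- Create the local role that the LDAP group clickhouse_etl will map to
CREATE ROLE etl;
-- Grant it the privileges that mapped users should receive
GRANT SELECT ON default.* TO etl;
```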
[Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/) <!--hide-->

View File

@ -135,6 +135,39 @@ Default value: 604800 (1 week).
Similar to [replicated_deduplication_window](#replicated-deduplication-window), `replicated_deduplication_window_seconds` specifies how long to store hash sums of blocks for insert deduplication. Hash sums older than `replicated_deduplication_window_seconds` are removed from ZooKeeper, even if they are less than `replicated_deduplication_window`.
## replicated_fetches_http_connection_timeout {#replicated_fetches_http_connection_timeout}
HTTP connection timeout (in seconds) for part fetch requests. Inherited from default profile [http_connection_timeout](./settings.md#http_connection_timeout) if not set explicitly.
Possible values:
- Any positive integer.
- 0 - Use value of `http_connection_timeout`.
Default value: 0.
## replicated_fetches_http_send_timeout {#replicated_fetches_http_send_timeout}
HTTP send timeout (in seconds) for part fetch requests. Inherited from default profile [http_send_timeout](./settings.md#http_send_timeout) if not set explicitly.
Possible values:
- Any positive integer.
- 0 - Use value of `http_send_timeout`.
Default value: 0.
## replicated_fetches_http_receive_timeout {#replicated_fetches_http_receive_timeout}
HTTP receive timeout (in seconds) for part fetch requests. Inherited from default profile [http_receive_timeout](./settings.md#http_receive_timeout) if not set explicitly.
Possible values:
- Any positive integer.
- 0 - Use value of `http_receive_timeout`.
Default value: 0.
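As these are MergeTree-level settings, one way to override them for a single table is `ALTER TABLE ... MODIFY SETTING` — a sketch with a hypothetical table name and value:

```sql
ALTER TABLE replicated_table
    MODIFY SETTING replicated_fetches_http_connection_timeout = 10;
```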
## old_parts_lifetime {#old-parts-lifetime}
The time (in seconds) of storing inactive parts to protect against data loss during spontaneous server reboots.

View File

@ -1520,8 +1520,8 @@ Do not merge aggregation states from different servers for distributed query pro
Possible values:
- 0 — Disabled (final query processing is done on the initiator node).
- 1 - Do not merge aggregation states from different servers for distributed query processing (query completelly processed on the shard, initiator only proxy the data).
- 2 - Same as 1 but apply `ORDER BY` and `LIMIT` on the initiator (can be used for queries with `ORDER BY` and/or `LIMIT`).
- 1 - Do not merge aggregation states from different servers for distributed query processing (the query is completely processed on the shard, the initiator only proxies the data); can be used when it is certain that there are different keys on different shards.
- 2 - Same as `1`, but applies `ORDER BY` and `LIMIT` on the initiator (this is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1`); can be used for queries with `ORDER BY` and/or `LIMIT`.
**Example**
@ -2863,6 +2863,39 @@ Sets the interval in seconds after which periodically refreshed [live view](../.
Default value: `60`.
## http_connection_timeout {#http_connection_timeout}
HTTP connection timeout (in seconds).
Possible values:
- Any positive integer.
- 0 - Disabled (infinite timeout).
Default value: 1.
## http_send_timeout {#http_send_timeout}
HTTP send timeout (in seconds).
Possible values:
- Any positive integer.
- 0 - Disabled (infinite timeout).
Default value: 1800.
## http_receive_timeout {#http_receive_timeout}
HTTP receive timeout (in seconds).
Possible values:
- Any positive integer.
- 0 - Disabled (infinite timeout).
Default value: 1800.
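For example, these timeouts can be raised for the current session with `SET` (a sketch; the value is arbitrary):

```sql
-- Allow slow HTTP endpoints up to 5 minutes before timing out
SET http_receive_timeout = 300;
```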
## check_query_single_value_result {#check_query_single_value_result}
Defines the level of detail for the [CHECK TABLE](../../sql-reference/statements/check-table.md#checking-mergetree-tables) query result for `MergeTree` family engines.

View File

@ -0,0 +1,98 @@
---
toc_priority: 65
toc_title: clickhouse-format
---
# clickhouse-format {#clickhouse-format}
Allows formatting input queries.
Keys:
- `--help` or `-h` — Produce help message.
- `--hilite` — Add syntax highlight with ANSI terminal escape sequences.
- `--oneline` — Format in single line.
- `--quiet` or `-q` — Just check syntax, no output on success.
- `--multiquery` or `-n` — Allow multiple queries in the same file.
- `--obfuscate` — Obfuscate instead of formatting.
- `--seed <string>` — Arbitrary seed string that determines the result of obfuscation.
- `--backslash` — Add a backslash at the end of each line of the formatted query. Can be useful when you copy a query from web or somewhere else with multiple lines, and want to execute it in command line.
## Examples {#examples}
1. Highlighting and single line:
```bash
$ clickhouse-format --oneline --hilite <<< "SELECT sum(number) FROM numbers(5);"
```
Result:
```sql
SELECT sum(number) FROM numbers(5)
```
2. Multiqueries:
```bash
$ clickhouse-format -n <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
```
Result:
```text
SELECT *
FROM
(
SELECT 1 AS x
UNION ALL
SELECT 1
UNION DISTINCT
SELECT 3
)
;
```
3. Obfuscating:
```bash
$ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```
Result:
```text
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
```
Same query and another seed string:
```bash
$ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```
Result:
```text
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
```
4. Adding backslash:
```bash
$ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
```
Result:
```text
SELECT * \
FROM \
( \
SELECT 1 AS x \
UNION ALL \
SELECT 1 \
UNION DISTINCT \
SELECT 3 \
)
```

View File

@ -9,5 +9,8 @@ toc_title: Overview
- [clickhouse-local](../../operations/utilities/clickhouse-local.md) — Allows running SQL queries on data without stopping the ClickHouse server, similar to how `awk` does this.
- [clickhouse-copier](../../operations/utilities/clickhouse-copier.md) — Copies (and reshards) data from one cluster to another cluster.
- [clickhouse-benchmark](../../operations/utilities/clickhouse-benchmark.md) — Loads server with the custom queries and settings.
- [clickhouse-format](../../operations/utilities/clickhouse-format.md) — Enables formatting input queries.
- [ClickHouse obfuscator](../../operations/utilities/clickhouse-obfuscator.md) — Obfuscates data.
- [ClickHouse compressor](../../operations/utilities/clickhouse-compressor.md) — Compresses and decompresses data.
- [clickhouse-odbc-bridge](../../operations/utilities/odbc-bridge.md) — A proxy server for ODBC driver.
[Original article](https://clickhouse.tech/docs/en/operations/utils/) <!--hide-->

View File

@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)
**Parameters**
- `window` — Length of the sliding window, it is the time interval between first condition and last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.
- `window` — Length of the sliding window, it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`.
- `mode` — It is an optional argument. One or more modes can be set.
- `'strict'` — If same condition holds for sequence of events then such non-unique events would be skipped.
- `'strict_order'` — Don't allow interventions of other events. E.g. in the case of `A->B->D->C`, it stops finding `A->B->C` at the `D` and the max event level is 2.
@ -312,7 +312,7 @@ FROM
GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC
ORDER BY level ASC;
```
Result:

View File

@ -6,9 +6,8 @@ toc_priority: 141
Sums the arithmetic difference between consecutive rows. If the difference is negative, it is ignored.
Note that the underlying data must be sorted in order for this function to work properly.
If you would like to use this function in a materialized view, you most likely want to use the
[deltaSumTimestamp](deltasumtimestamp.md) method instead.
!!! info "Note"
The underlying data must be sorted for this function to work properly. If you would like to use this function in a [materialized view](../../../sql-reference/statements/create/view.md#materialized), you most likely want to use the [deltaSumTimestamp](../../../sql-reference/aggregate-functions/reference/deltasumtimestamp.md#agg_functions-deltasumtimestamp) method instead.
**Syntax**

View File

@ -2,38 +2,42 @@
toc_priority: 141
---
# deltaSumTimestamp {#agg_functions-deltasum}
# deltaSumTimestamp {#agg_functions-deltasumtimestamp}
Syntax: `deltaSumTimestamp(value, timestamp)`
Adds the difference between consecutive rows. If the difference is negative, it is ignored.
Adds the differences between consecutive rows. If the difference is negative, it is ignored.
Uses `timestamp` to order values.
This function is primarily for [materialized views](../../../sql-reference/statements/create/view.md#materialized) that are ordered by some time bucket-aligned timestamp, for example, a `toStartOfMinute` bucket. Because the rows in such a materialized view will all have the same timestamp, it is impossible for them to be merged in the "right" order. This function keeps track of the `timestamp` of the values it's seen, so it's possible to order the states correctly during merging.
This function is primarily for materialized views that are ordered by some time bucket aligned
timestamp, for example a `toStartOfMinute` bucket. Because the rows in such a materialized view
will all have the same timestamp, it is impossible for them to be merged in the "right" order. This
function keeps track of the `timestamp` of the values it's seen, so it's possible to order the states
correctly during merging.
To calculate the delta sum across an ordered collection you can simply use the [deltaSum](../../../sql-reference/aggregate-functions/reference/deltasum.md#agg_functions-deltasum) function.
To calculate the delta sum across an ordered collection you can simply use the
[deltaSum](./deltasum.md) function.
**Syntax**
``` sql
deltaSumTimestamp(value, timestamp)
```
**Arguments**
- `value` must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
- `timestamp` must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
- `value` — Input values, must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
- `timestamp` — The parameter used to order values; must be some [Integer](../../data-types/int-uint.md) type or [Float](../../data-types/float.md) type or a [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
**Returned value**
- Accumulated differences between consecutive values, ordered by the `timestamp` parameter.
- Accumulated differences between consecutive values, ordered by the `timestamp` parameter.
Type: [Integer](../../data-types/int-uint.md) or [Float](../../data-types/float.md) or [Date](../../data-types/date.md) or [DateTime](../../data-types/datetime.md).
**Example**
Query:
```sql
SELECT deltaSumTimestamp(value, timestamp)
FROM (select number as timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] as value from numbers(1, 10))
FROM (SELECT number AS timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] AS value FROM numbers(1, 10));
```
Result:
``` text
┌─deltaSumTimestamp(value, timestamp)─┐
│ 13 │

View File

@ -38,4 +38,4 @@ We recommend using this function in almost all scenarios.
- [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)
- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12)
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
- [uniqThetaSketch](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)
- [uniqTheta](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)

View File

@ -49,4 +49,4 @@ Compared to the [uniq](../../../sql-reference/aggregate-functions/reference/uniq
- [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)
- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12)
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
- [uniqThetaSketch](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)
- [uniqTheta](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)

View File

@ -23,4 +23,4 @@ The function takes a variable number of parameters. Parameters can be `Tuple`, `
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqcombined)
- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniqhll12)
- [uniqThetaSketch](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)
- [uniqTheta](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)

View File

@ -37,4 +37,4 @@ We don't recommend using this function. In most cases, use the [uniq](../../..
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined)
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
- [uniqThetaSketch](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)
- [uniqTheta](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)

View File

@ -2,12 +2,12 @@
toc_priority: 195
---
# uniqThetaSketch {#agg_function-uniqthetasketch}
# uniqTheta {#agg_function-uniqthetasketch}
Calculates the approximate number of different argument values, using the [Theta Sketch Framework](https://datasketches.apache.org/docs/Theta/ThetaSketchFramework.html).
``` sql
uniqThetaSketch(x[, ...])
uniqTheta(x[, ...])
```
**Arguments**

View File

@ -31,7 +31,7 @@ For example, Decimal32(4) can contain numbers from -99999.9999 to 99999.9999 wit
Internally data is represented as normal signed integers with respective bit width. Real value ranges that can be stored in memory are a bit larger than specified above, which are checked only on conversion from a string.
Because modern CPUs do not support 128-bit integers natively, operations on Decimal128 are emulated. Because of this Decimal128 works significantly slower than Decimal32/Decimal64.
Because modern CPUs do not support 128-bit integers natively, operations on Decimal128 are emulated. Because of this Decimal128 works significantly slower than Decimal32/Decimal64.
## Operations and Result Type {#operations-and-result-type}

View File

@ -95,7 +95,10 @@ LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000))
The dictionary is completely stored in memory in the form of a hash table. The dictionary can contain any number of elements with any identifiers. In practice, the number of keys can reach tens of millions of items.
The hash table will be preallocated (this will make dictionary load faster), if the is approx number of total rows is known, this is supported only if the source is `clickhouse` without any `<where>` (since in case of `<where>` you can filter out too much rows and the dictionary will allocate too much memory, that will not be used eventually).
If `preallocate` is `true` (the default is `false`), the hash table will be preallocated (this will make the dictionary load faster). But note that you should use it only if:
- The source supports an approximate number of elements (for now it is supported only by the `ClickHouse` source).
- There are no duplicates in the data (otherwise it may increase memory usage for the hash table); see the sketch below.
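A hedged sketch of a dictionary definition that opts into preallocation (the dictionary and source table names are hypothetical):

```sql
CREATE DICTIONARY hashed_dict
(
    id UInt64,
    value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dict_source'))
LIFETIME(MIN 300 MAX 360)
LAYOUT(HASHED(PREALLOCATE 1));
```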
All types of sources are supported. When updating, data (from a file or from a table) is read in its entirety.
@ -103,21 +106,23 @@ Configuration example:
``` xml
<layout>
<hashed />
<hashed>
<preallocate>0</preallocate>
</hashed>
</layout>
```
or
``` sql
LAYOUT(HASHED())
LAYOUT(HASHED(PREALLOCATE 0))
```
### sparse_hashed {#dicts-external_dicts_dict_layout-sparse_hashed}
Similar to `hashed`, but uses less memory in favor of more CPU usage.
It will be also preallocated so as `hashed`, note that it is even more significant for `sparse_hashed`.
It will also be preallocated like `hashed` (when `preallocate` is set to `true`); note that preallocation is even more significant for `sparse_hashed`.
Configuration example:
@ -127,8 +132,10 @@ Configuration example:
</layout>
```
or
``` sql
LAYOUT(SPARSE_HASHED())
LAYOUT(SPARSE_HASHED([PREALLOCATE 0]))
```
### complex_key_hashed {#complex-key-hashed}

View File

@ -30,7 +30,7 @@ LIFETIME(300)
Setting `<lifetime>0</lifetime>` (`LIFETIME(0)`) prevents dictionaries from updating.
You can set a time interval for upgrades, and ClickHouse will choose a uniformly random time within this range. This is necessary in order to distribute the load on the dictionary source when upgrading on a large number of servers.
You can set a time interval for updates, and ClickHouse will choose a uniformly random time within this range. This is necessary in order to distribute the load on the dictionary source when updating on a large number of servers.
Example of settings:
@ -54,7 +54,7 @@ LIFETIME(MIN 300 MAX 360)
If `<min>0</min>` and `<max>0</max>`, ClickHouse does not reload the dictionary by timeout.
In this case, ClickHouse can reload the dictionary earlier if the dictionary configuration file was changed or the `SYSTEM RELOAD DICTIONARY` command was executed.
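For example, to force a reload manually (the dictionary name is hypothetical):

```sql
SYSTEM RELOAD DICTIONARY my_dict;
```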
When upgrading the dictionaries, the ClickHouse server applies different logic depending on the type of [source](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md):
When updating the dictionaries, the ClickHouse server applies different logic depending on the type of [source](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md):
- For a text file, it checks the time of modification. If the time differs from the previously recorded time, the dictionary is updated.
- For a MySQL source, the time of modification is checked using a `SHOW TABLE STATUS` query (in case of MySQL 8 you need to disable meta-information caching in MySQL by `set global information_schema_stats_expiry=0`).
@ -86,3 +86,4 @@ SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source wher
...
```
For `Cache`, `ComplexKeyCache`, `SSDCache`, and `SSDComplexKeyCache` dictionaries both synchronous and asynchronous updates are supported.

View File

@ -159,7 +159,7 @@ Configuration fields:
| Tag | Description | Required |
|------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
| `name` | Column name. | Yes |
| `type` | ClickHouse data type.<br/>ClickHouse tries to cast value from dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.<br/>[Nullable](../../../sql-reference/data-types/nullable.md) is currently supported for [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md) dictionaries. In [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache), [IPTrie](external-dicts-dict-layout.md#ip-trie) dictionaries `Nullable` types are not supported. | Yes |
| `type` | ClickHouse data type.<br/>ClickHouse tries to cast value from dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.<br/>[Nullable](../../../sql-reference/data-types/nullable.md) is currently supported for [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md), [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache) dictionaries. In [IPTrie](external-dicts-dict-layout.md#ip-trie) dictionaries `Nullable` types are not supported. | Yes |
| `null_value` | Default value for a non-existing element.<br/>In the example, it is an empty string. [NULL](../../syntax.md#null-literal) value can be used only for the `Nullable` types (see the previous line with types description). | Yes |
| `expression` | [Expression](../../../sql-reference/syntax.md#syntax-expressions) that ClickHouse executes on the value.<br/>The expression can be a column name in the remote SQL database. Thus, you can use it to create an alias for the remote column.<br/><br/>Default value: no expression. | No |
| <a name="hierarchical-dict-attr"></a> `hierarchical` | If `true`, the attribute contains the value of a parent key for the current key. See [Hierarchical Dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md).<br/><br/>Default value: `false`. | No |

View File

@ -1544,3 +1544,52 @@ SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
```
Note that `arraySumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
## arrayProduct {#arrayproduct}
Multiplies elements of an [array](../../sql-reference/data-types/array.md).
**Syntax**
``` sql
arrayProduct(arr)
```
**Arguments**
- `arr` — [Array](../../sql-reference/data-types/array.md) of numeric values.
**Returned value**
- The product of the array's elements.
Type: [Float64](../../sql-reference/data-types/float.md).
**Examples**
Query:
``` sql
SELECT arrayProduct([1,2,3,4,5,6]) AS res;
```
Result:
``` text
┌─res───┐
│ 720 │
└───────┘
```
Query:
``` sql
SELECT arrayProduct([toDecimal64(1,8), toDecimal64(2,8), toDecimal64(3,8)]) AS res, toTypeName(res);
```
Return value type is always [Float64](../../sql-reference/data-types/float.md). Result:
``` text
┌─res─┬─toTypeName(arrayProduct(array(toDecimal64(1, 8), toDecimal64(2, 8), toDecimal64(3, 8))))─┐
│ 6 │ Float64 │
└─────┴──────────────────────────────────────────────────────────────────────────────────────────┘
```

View File

@ -10,21 +10,22 @@ toc_title: External Dictionaries
For information on connecting and configuring external dictionaries, see [External dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
## dictGet {#dictget}
## dictGet, dictGetOrDefault, dictGetOrNull {#dictget}
Retrieves a value from an external dictionary.
Retrieves values from an external dictionary.
``` sql
dictGet('dict_name', 'attr_name', id_expr)
dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
dictGet('dict_name', attr_names, id_expr)
dictGetOrDefault('dict_name', attr_names, id_expr, default_value_expr)
dictGetOrNull('dict_name', attr_name, id_expr)
```
**Arguments**
- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `attr_name` — Name of the column of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `attr_names` — Name of the column of the dictionary, [String literal](../../sql-reference/syntax.md#syntax-string-literal), or tuple of column names, [Tuple](../../sql-reference/data-types/tuple.md)([String literal](../../sql-reference/syntax.md#syntax-string-literal)).
- `id_expr` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md) or [Tuple](../../sql-reference/data-types/tuple.md)-type value depending on the dictionary configuration.
- `default_value_expr` — Value returned if the dictionary doesn't contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning the value in the data type configured for the `attr_name` attribute.
- `default_value_expr` — Values returned if the dictionary doesn't contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) or [Tuple](../../sql-reference/data-types/tuple.md)([Expression](../../sql-reference/syntax.md#syntax-expressions)), returning the value (or values) in the data types configured for the `attr_names` attribute.
**Returned value**
@ -34,12 +35,13 @@ dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
- `dictGet` returns the content of the `<null_value>` element specified for the attribute in the dictionary configuration.
- `dictGetOrDefault` returns the value passed as the `default_value_expr` parameter.
- `dictGetOrNull` returns `NULL` in case key was not found in dictionary.
ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn't match the attribute data type.
**Example**
**Example for simple key dictionary**
Create a text file `ext-dict-text.csv` containing the following:
Create a text file `ext-dict-test.csv` containing the following:
``` text
1,1
@ -96,6 +98,130 @@ LIMIT 3
└─────┴────────┘
```
**Example for complex key dictionary**
Create a text file `ext-dict-mult.csv` containing the following:
``` text
1,1,'1'
2,2,'2'
3,3,'3'
```
The first column is `id`, the second is `c1`, the third is `c2`.
Configure the external dictionary:
``` xml
<yandex>
<dictionary>
<name>ext-dict-mult</name>
<source>
<file>
<path>/path-to/ext-dict-mult.csv</path>
<format>CSV</format>
</file>
</source>
<layout>
<flat />
</layout>
<structure>
<id>
<name>id</name>
</id>
<attribute>
<name>c1</name>
<type>UInt32</type>
<null_value></null_value>
</attribute>
<attribute>
<name>c2</name>
<type>String</type>
<null_value></null_value>
</attribute>
</structure>
<lifetime>0</lifetime>
</dictionary>
</yandex>
```
Perform the query:
``` sql
SELECT
dictGet('ext-dict-mult', ('c1','c2'), number) AS val,
toTypeName(val) AS type
FROM system.numbers
LIMIT 3;
```
``` text
┌─val─────┬─type──────────────────┐
│ (1,'1') │ Tuple(UInt8, String) │
│ (2,'2') │ Tuple(UInt8, String) │
│ (3,'3') │ Tuple(UInt8, String) │
└─────────┴───────────────────────┘
```
**Example for range key dictionary**
Input table:
```sql
CREATE TABLE range_key_dictionary_source_table
(
key UInt64,
start_date Date,
end_date Date,
value String,
value_nullable Nullable(String)
)
ENGINE = TinyLog();
INSERT INTO range_key_dictionary_source_table VALUES(1, toDate('2019-05-20'), toDate('2019-05-20'), 'First', 'First');
INSERT INTO range_key_dictionary_source_table VALUES(2, toDate('2019-05-20'), toDate('2019-05-20'), 'Second', NULL);
INSERT INTO range_key_dictionary_source_table VALUES(3, toDate('2019-05-20'), toDate('2019-05-20'), 'Third', 'Third');
```
Create the external dictionary:
```sql
CREATE DICTIONARY range_key_dictionary
(
key UInt64,
start_date Date,
end_date Date,
value String,
value_nullable Nullable(String)
)
PRIMARY KEY key
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'range_key_dictionary_source_table'))
LIFETIME(MIN 1 MAX 1000)
LAYOUT(RANGE_HASHED())
RANGE(MIN start_date MAX end_date);
```
Perform the query:
``` sql
SELECT
(number, toDate('2019-05-20')),
dictHas('range_key_dictionary', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', 'value', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', 'value_nullable', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', ('value', 'value_nullable'), number, toDate('2019-05-20'))
FROM system.numbers LIMIT 5 FORMAT TabSeparated;
```
Result:
``` text
(0,'2019-05-20') 0 \N \N (NULL,NULL)
(1,'2019-05-20') 1 First First ('First','First')
(2,'2019-05-20') 0 \N \N (NULL,NULL)
(3,'2019-05-20') 0 \N \N (NULL,NULL)
(4,'2019-05-20') 0 \N \N (NULL,NULL)
```
**See Also**
- [External Dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md)
@ -202,4 +328,3 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
- `dictGet[Type]OrDefault` returns the value passed as the `default_value_expr` parameter.
ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn't match the attribute data type.

View File

@ -422,7 +422,7 @@ Type: [UInt8](../../sql-reference/data-types/int-uint.md).
Query:
``` sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8')
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');
```
Result:
@ -436,7 +436,7 @@ Result:
Query:
``` sql
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16')
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16');
```
Result:

View File

@ -90,6 +90,53 @@ SELECT splitByString('', 'abcde')
└────────────────────────────┘
```
## splitByRegexp(regexp, s) {#splitbyregexpseparator-s}
Splits a string into substrings separated by a regular expression. It uses a regular expression string `regexp` as the separator. If the `regexp` is empty, it will split the string `s` into an array of single characters. If no match is found for this regular expression, the string `s` won't be split.
**Syntax**
``` sql
splitByRegexp(<regexp>, <s>)
```
**Arguments**
- `regexp` — Regular expression. Constant. [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md).
- `s` — The string to split. [String](../../sql-reference/data-types/string.md).
**Returned value(s)**
Returns an array of selected substrings. Empty substrings may be selected when:
- A non-empty regular expression match occurs at the beginning or end of the string;
- There are multiple consecutive non-empty regular expression matches;
- The original string `s` is empty while the regular expression is not empty.
Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md).
**Example**
``` sql
SELECT splitByRegexp('\\d+', 'a12bc23de345f')
```
``` text
┌─splitByRegexp('\\d+', 'a12bc23de345f')─┐
│ ['a','bc','de','f'] │
└────────────────────────────────────────┘
```
``` sql
SELECT splitByRegexp('', 'abcde')
```
``` text
┌─splitByRegexp('', 'abcde')─┐
│ ['a','b','c','d','e'] │
└────────────────────────────┘
```
## arrayStringConcat(arr\[, separator\]) {#arraystringconcatarr-separator}
Concatenates the strings listed in the array with the separator. `separator` is an optional parameter: a constant string, set to an empty string by default.
@ -149,4 +196,3 @@ Result:
│ [['abc','123'],['8','"hkl"']] │
└───────────────────────────────────────────────────────────────────────┘
```

View File

@ -373,7 +373,7 @@ This function accepts a number or date or date with time, and returns a FixedStr
## reinterpretAsUUID {#reinterpretasuuid}
This function accepts 16 bytes string, and returns UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the functions work as if the string is padded with the necessary number of null bytes to the end. If the string longer than 16 bytes, the extra bytes at the end are ignored.
Accepts a 16-byte string and returns a UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes at the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored.
**Syntax**
@ -429,7 +429,24 @@ Result:
## reinterpret(x, T) {#type_conversion_function-reinterpret}
Use the same source in-memory bytes sequence for `x` value and reinterpret it to destination type
Uses the same source in-memory bytes sequence for `x` value and reinterprets it to destination type.
**Syntax**
``` sql
reinterpret(x, type)
```
**Arguments**
- `x` — Any type.
- `type` — Destination type. [String](../../sql-reference/data-types/string.md).
**Returned value**
- Destination type value.
**Examples**
Query:
```sql
@ -448,11 +465,27 @@ Result:
## CAST(x, T) {#type_conversion_function-cast}
Converts input value `x` to the `T` data type. Unlike to `reinterpret` function use external representation of `x` value.
Converts input value `x` to the `T` data type. Unlike the `reinterpret` function, type conversion is performed in a natural way.
The syntax `CAST(x AS t)` is also supported.
Note, that if value `x` does not fit the bounds of type T, the function overflows. For example, CAST(-1, 'UInt8') returns 255.
!!! note "Note"
If value `x` does not fit the bounds of type `T`, the function overflows. For example, `CAST(-1, 'UInt8')` returns `255`.
**Syntax**
``` sql
CAST(x, T)
```
**Arguments**
- `x` — Any type.
- `T` — Destination type. [String](../../sql-reference/data-types/string.md).
**Returned value**
- Destination type value.
**Examples**
@ -460,9 +493,9 @@ Query:
```sql
SELECT
cast(toInt8(-1), 'UInt8') AS cast_int_to_uint,
cast(toInt8(1), 'Float32') AS cast_int_to_float,
cast('1', 'UInt32') AS cast_string_to_int
CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint,
CAST(toInt8(1), 'Float32') AS cast_int_to_float,
CAST('1', 'UInt32') AS cast_string_to_int;
```
Result:
@ -492,7 +525,7 @@ Result:
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
```
Conversion to FixedString(N) only works for arguments of type String or FixedString(N).
Conversion to FixedString(N) only works for arguments of type [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).
Type conversion to [Nullable](../../sql-reference/data-types/nullable.md) and back is supported.
@ -1038,7 +1071,7 @@ Result:
## parseDateTime64BestEffort {#parsedatetime64besteffort}
Same as [parseDateTimeBestEffort](#parsedatetimebesteffort) function but also parse milliseconds and microseconds and return `DateTime64(3)` or `DateTime64(6)` data types.
Same as the [parseDateTimeBestEffort](#parsedatetimebesteffort) function but also parses milliseconds and microseconds and returns the [DateTime](../../sql-reference/functions/type-conversion-functions.md#data_type-datetime) data type.
**Syntax**
@ -1049,9 +1082,13 @@ parseDateTime64BestEffort(time_string [, precision [, time_zone]])
**Parameters**
- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
- `precision``3` for milliseconds, `6` for microseconds. Default `3`. Optional [UInt8](../../sql-reference/data-types/int-uint.md).
- `precision`Required precision. `3` for milliseconds, `6` for microseconds. Default `3`. Optional. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).
**Returned value**
- `time_string` converted to the [DateTime](../../sql-reference/data-types/datetime.md) data type.
**Examples**
Query:
@ -1064,7 +1101,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
FORMAT PrettyCompactMonoBlcok
FORMAT PrettyCompactMonoBlock;
```
Result:
@ -1131,12 +1168,14 @@ Result:
## toUnixTimestamp64Nano {#tounixtimestamp64nano}
Converts a `DateTime64` to a `Int64` value with fixed sub-second precision.
Input value is scaled up or down appropriately depending on it precision. Please note that output value is a timestamp in UTC, not in timezone of `DateTime64`.
Converts a `DateTime64` to an `Int64` value with fixed sub-second precision. The input value is scaled up or down appropriately depending on its precision.
!!! info "Note"
The output value is a timestamp in UTC, not in the timezone of `DateTime64`.
**Syntax**
``` sql
```sql
toUnixTimestamp64Milli(value)
```
@ -1152,7 +1191,7 @@ toUnixTimestamp64Milli(value)
Query:
``` sql
```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```
@ -1298,4 +1337,3 @@ Result:
│ 2,"good" │
└───────────────────────────────────────────┘
```

View File

@ -15,11 +15,23 @@ ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```
To use `ALTER USER` you must have the [ALTER USER](../../../sql-reference/statements/grant.md#grant-access-management) privilege.
## GRANTEES Clause {#grantees}
Specifies users or roles which are allowed to receive [privileges](../../../sql-reference/statements/grant.md#grant-privileges) from this user on the condition that this user also has all the required access granted with [GRANT OPTION](../../../sql-reference/statements/grant.md#grant-privigele-syntax). Options of the `GRANTEES` clause:
- `user` — Specifies a user this user can grant privileges to.
- `role` — Specifies a role this user can grant privileges to.
- `ANY` — This user can grant privileges to anyone. It's the default setting.
- `NONE` — This user can grant privileges to none.
You can exclude any user or role by using the `EXCEPT` expression. For example, `ALTER USER user1 GRANTEES ANY EXCEPT user2`. It means that if `user1` has some privileges granted with `GRANT OPTION`, it will be able to grant those privileges to anyone except `user2`.
## Examples {#alter-user-examples}
Set assigned roles as default:
@ -43,3 +55,9 @@ Set all the assigned roles to default, excepting `role1` and `role2`:
``` sql
ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2
```
Allows the user with the `john` account to grant his privileges to the user with the `jack` account:
``` sql
ALTER USER john GRANTEES jack;
```

View File

@ -15,6 +15,7 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...]]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```
@ -53,12 +54,24 @@ Another way of specifying host is to use `@` syntax following the username. Exam
!!! info "Warning"
ClickHouse treats `user_name@'address'` as a username as a whole. Thus, technically you can create multiple users with the same `user_name` and different constructions after `@`. However, we don't recommend doing so.
## GRANTEES Clause {#grantees}
Specifies users or roles which are allowed to receive [privileges](../../../sql-reference/statements/grant.md#grant-privileges) from this user on the condition that this user also has all the required access granted with [GRANT OPTION](../../../sql-reference/statements/grant.md#grant-privigele-syntax). Options of the `GRANTEES` clause:
- `user` — Specifies a user this user can grant privileges to.
- `role` — Specifies a role this user can grant privileges to.
- `ANY` — This user can grant privileges to anyone. It's the default setting.
- `NONE` — This user can grant privileges to none.
You can exclude any user or role by using the `EXCEPT` expression. For example, `CREATE USER user1 GRANTEES ANY EXCEPT user2`. It means that if `user1` has some privileges granted with `GRANT OPTION`, it will be able to grant those privileges to anyone except `user2`.
## Examples {#create-user-examples}
Create the user account `mira` protected by the password `qwerty`:
``` sql
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty'
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty';
```
`mira` should start the client app on the host where the ClickHouse server runs.
@ -66,13 +79,13 @@ CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty'
Create the user account `john`, assign roles to it and make these roles default:
``` sql
CREATE USER john DEFAULT ROLE role1, role2
CREATE USER john DEFAULT ROLE role1, role2;
```
Create the user account `john` and make all his future roles default:
``` sql
CREATE USER user DEFAULT ROLE ALL
CREATE USER john DEFAULT ROLE ALL;
```
When some role is assigned to `john` in the future, it will become default automatically.
@ -80,5 +93,11 @@ When some role is assigned to `john` in the future, it will become default autom
Create the user account `john` and make all his future roles default excepting `role1` and `role2`:
``` sql
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2;
```
Create the user account `john` and allow him to grant his privileges to the user with the `jack` account:
``` sql
CREATE USER john GRANTEES jack;
```

View File

@ -316,7 +316,7 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A
Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH](../../sql-reference/statements/misc.md#detach) queries according to the following hierarchy of privileges:
- `DROP`. Level:
- `DROP`. Level: `GROUP`
- `DROP DATABASE`. Level: `DATABASE`
- `DROP TABLE`. Level: `TABLE`
- `DROP VIEW`. Level: `VIEW`

View File

@ -11,4 +11,4 @@ TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
Removes all data from a table. When the clause `IF EXISTS` is omitted, the query returns an error if the table does not exist.
The `TRUNCATE` query is not supported for [View](../../engines/table-engines/special/view.md), [File](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md) and [Null](../../engines/table-engines/special/null.md) table engines.
The `TRUNCATE` query is not supported for [View](../../engines/table-engines/special/view.md), [File](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md), [Buffer](../../engines/table-engines/special/buffer.md) and [Null](../../engines/table-engines/special/null.md) table engines.
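A minimal usage sketch (the database, table, and cluster names are hypothetical):

```sql
TRUNCATE TABLE IF EXISTS db.hits ON CLUSTER my_cluster;
```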

View File

@ -183,7 +183,7 @@ CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9
#### Limitations {#limitations}
* hadoop\_security\_kerberos\_ticket\_cache\_path can only be defined at the global level
## Kerberos Support {#kerberos-support}
If the hadoop\_security\_authentication parameter has the value 'kerberos', ClickHouse authenticates via Kerberos.
[Advanced parameters](#clickhouse-extras) and hadoop\_security\_kerberos\_ticket\_cache\_path help with this.

View File

@ -82,8 +82,9 @@ SELECT * FROM s3_engine_table LIMIT 2;
Optional settings:
- `access_key_id` and `secret_access_key` — Credentials to use with the given endpoint.
- `use_environment_credentials` — If `true`, the S3 client will try to obtain credentials from environment variables and [Amazon EC2](https://ru.wikipedia.org/wiki/Amazon_EC2) metadata for the given endpoint. Default value is `false`.
- `use_insecure_imds_request` — If `true`, the S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Default value is `false`.
- `region` — S3 region name.
- `use_environment_credentials` — If `true`, the S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for the given endpoint. Default value is `false`.
- `header` — Adds the specified HTTP header to a request to the given endpoint. Can be specified multiple times.
- `server_side_encryption_customer_key_base64` — Sets the required headers for accessing S3 objects with SSE-C encryption.
@ -97,6 +98,7 @@ SELECT * FROM s3_engine_table LIMIT 2;
<!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
<!-- <region>us-west-1</region> -->
<!-- <use_environment_credentials>false</use_environment_credentials> -->
<!-- <use_insecure_imds_request>false</use_insecure_imds_request> -->
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
</endpoint-name>
@ -105,7 +107,7 @@ SELECT * FROM s3_engine_table LIMIT 2;
## Usage Examples {#usage-examples}
Suppose we have several files in TSV format with the following URLs in HDFS:
Suppose we have several files in TSV format with the following URLs in S3:
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
@ -143,6 +145,7 @@ ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_p
CREATE TABLE big_table (name String, value UInt32)
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV');
```
**See Also**
- [S3 table function](../../../sql-reference/table-functions/s3.md)

View File

@ -95,7 +95,9 @@ sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh
- [AArch64](https://builds.clickhouse.tech/master/aarch64/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/aarch64/clickhouse' && chmod a+x ./clickhouse`
- [FreeBSD](https://builds.clickhouse.tech/master/freebsd/clickhouse) — `curl -O 'https://builds.clickhouse.tech/master/freebsd/clickhouse' && chmod a+x ./clickhouse`
After downloading, you can use `clickhouse client` to connect to the server, or `clickhouse local` to process local data. To run `clickhouse server`, you need to download the [server](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/config.xml) and [users](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/users.xml) configuration files from GitHub.
After downloading, you can use `clickhouse client` to connect to the server or `clickhouse local` to process local data.
To install ClickHouse system-wide (with the necessary configuration files, user settings, etc.), run `sudo ./clickhouse install`. Then run `clickhouse start` (to start the server) and `clickhouse-client` (to connect to it).
These builds are not recommended for use in production because they are not thoroughly tested. Also, they do not include all ClickHouse features.
@ -172,4 +174,3 @@ SELECT 1
**Congratulations, the system works!**
For further experiments, you can try loading one of the test datasets or go through the [step-by-step tutorial for beginners](https://clickhouse.tech/tutorial.html).

View File

@ -149,6 +149,39 @@ Eсли суммарное число активных кусков во все
Стандартное значение Linux dirty_expire_centisecs - 30 секунд (максимальное время, которое записанные данные хранятся только в оперативной памяти), но при больших нагрузках на дисковую систему, данные могут быть записаны намного позже. Экспериментально было найдено время - 480 секунд, за которое гарантированно новый кусок будет записан на диск.
## replicated_fetches_http_connection_timeout {#replicated_fetches_http_connection_timeout}
Тайм-аут HTTP-соединения (в секундах) для запросов на скачивание кусков. Наследуется из профиля по умолчанию [http_connection_timeout](./settings.md#http_connection_timeout), если не задан явно.
Возможные значения:
- 0 - используется значение `http_connection_timeout`.
- Любое положительное целое число.
Значение по умолчанию: `0`.
## replicated_fetches_http_send_timeout {#replicated_fetches_http_send_timeout}
Тайм-аут (в секундах) для отправки HTTP-запросов на скачивание кусков. Наследуется из профиля по умолчанию [http_send_timeout](./settings.md#http_send_timeout), если не задан явно.
Возможные значения:
- 0 - используется значение `http_send_timeout`.
- Любое положительное целое число.
Значение по умолчанию: `0`.
## replicated_fetches_http_receive_timeout {#replicated_fetches_http_receive_timeout}
Тайм-аут (в секундах) для получения HTTP-запросов на скачивание кусков. Наследуется из профиля по умолчанию [http_receive_timeout](./settings.md#http_receive_timeout), если не задан явно.
Возможные значения:
- 0 - используется значение `http_receive_timeout`.
- Любое положительное целое число.
Значение по умолчанию: `0`.
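Ниже приведён примерный набросок того, как эти настройки можно переопределить для отдельной таблицы (предполагается, что они, как и другие настройки MergeTree, изменяются через `ALTER TABLE ... MODIFY SETTING`; имя таблицы `replicated_table` и значения выбраны условно):
``` sql
-- Увеличиваем тайм-ауты скачивания кусков для одной реплицируемой таблицы.
ALTER TABLE replicated_table
    MODIFY SETTING
        replicated_fetches_http_connection_timeout = 30,
        replicated_fetches_http_send_timeout = 600,
        replicated_fetches_http_receive_timeout = 600;
```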
## max_bytes_to_merge_at_max_space_in_pool {#max-bytes-to-merge-at-max-space-in-pool}
Максимальный суммарный размер кусков (в байтах) в одном слиянии, при наличии свободных ресурсов в фоновом пуле.
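Примерный набросок задания этой настройки при создании таблицы (имя таблицы и значение условные):
``` sql
-- Ограничиваем суммарный размер кусков в одном слиянии примерно 50 ГБ.
CREATE TABLE merge_limited
(
    key UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY key
SETTINGS max_bytes_to_merge_at_max_space_in_pool = 53687091200;
```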

View File

@ -1761,7 +1761,7 @@ ClickHouse генерирует исключение
## background_pool_size {#background_pool_size}
Задает количество потоков для выполнения фоновых операций в движках таблиц (например, слияния в таблицах c движком [MergeTree](../../engines/table-engines/mergetree-family/index.md)). Настройка применяется при запуске сервера ClickHouse и не может быть изменена во пользовательском сеансе. Настройка позволяет управлять загрузкой процессора и диска. Чем меньше пулл, тем ниже нагрузка на CPU и диск, при этом фоновые процессы замедляются, что может повлиять на скорость выполнения запроса.
Задает количество потоков для выполнения фоновых операций в движках таблиц (например, слияния в таблицах c движком [MergeTree](../../engines/table-engines/mergetree-family/index.md)). Настройка применяется при запуске сервера ClickHouse и не может быть изменена в пользовательском сеансе. Настройка позволяет управлять загрузкой процессора и диска. Чем меньше пул, тем ниже нагрузка на CPU и диск, при этом фоновые процессы работают с меньшей интенсивностью, что в конечном итоге может повлиять на производительность запросов, потому что сервер будет обрабатывать больше кусков.
Допустимые значения:
@ -2767,12 +2767,12 @@ SELECT * FROM test2;
## prefer_column_name_to_alias {#prefer-column-name-to-alias}
Включает или отключает замену названий столбцов на синонимы в выражениях и секциях запросов, см. [Примечания по использованию синонимов](../../sql-reference/syntax.md#syntax-expression_aliases). Включите эту настройку, чтобы синтаксис синонимов в ClickHouse был более совместим с большинством других СУБД.
Включает или отключает замену названий столбцов на псевдонимы (alias) в выражениях и секциях запросов, см. [Примечания по использованию синонимов](../../sql-reference/syntax.md#syntax-expression_aliases). Включите эту настройку, чтобы синтаксис псевдонимов в ClickHouse был более совместим с большинством других СУБД.
Возможные значения:
- 0 — синоним подставляется вместо имени столбца.
- 1 — синоним не подставляется вместо имени столбца.
- 0 — псевдоним подставляется вместо имени столбца.
- 1 — псевдоним не подставляется вместо имени столбца.
Значение по умолчанию: `0`.
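Условная иллюстрация: при значении `0` запрос ниже завершится ошибкой, потому что в `max(number)` вместо столбца будет подставлен псевдоним `number`; при значении `1` используется исходный столбец:
``` sql
SET prefer_column_name_to_alias = 1;

SELECT avg(number) AS number, max(number)
FROM numbers(10);
```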
@ -2857,5 +2857,37 @@ SELECT * FROM test LIMIT 10 OFFSET 100;
│ 109 │
└─────┘
```
## http_connection_timeout {#http_connection_timeout}
Тайм-аут для HTTP-соединения (в секундах).
Возможные значения:
- 0 - бесконечный тайм-аут.
- Любое положительное целое число.
Значение по умолчанию: `1`.
## http_send_timeout {#http_send_timeout}
Тайм-аут для отправки данных через HTTP-интерфейс (в секундах).
Возможные значения:
- 0 - бесконечный тайм-аут.
- Любое положительное целое число.
Значение по умолчанию: `1800`.
## http_receive_timeout {#http_receive_timeout}
Тайм-аут для получения данных через HTTP-интерфейс (в секундах).
Возможные значения:
- 0 - бесконечный тайм-аут.
- Любое положительное целое число.
Значение по умолчанию: `1800`.
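Небольшой условный пример изменения этих настроек на уровне сеанса:
``` sql
-- Меняем тайм-ауты HTTP для текущего сеанса (значения условные).
SET http_connection_timeout = 2;
SET http_send_timeout = 300;
SET http_receive_timeout = 300;
```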
[Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings/) <!--hide-->

View File

@ -21,6 +21,7 @@
- `bytes_allocated` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Объем оперативной памяти, используемый словарем.
- `query_count` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Количество запросов с момента загрузки словаря или с момента последней успешной перезагрузки.
- `hit_rate` ([Float64](../../sql-reference/data-types/float.md)) — Для cache-словарей — процент закэшированных значений.
- `found_rate` ([Float64](../../sql-reference/data-types/float.md)) — Процент обращений к словарю, при которых значение было найдено.
- `element_count` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Количество элементов, хранящихся в словаре.
- `load_factor` ([Float64](../../sql-reference/data-types/float.md)) — Процент заполнения словаря (для хэшированного словаря — процент заполнения хэш-таблицы).
- `source` ([String](../../sql-reference/data-types/string.md)) — Текст, описывающий [источник данных](../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) для словаря.
@ -58,4 +59,3 @@ SELECT * FROM system.dictionaries
│ dictdb │ dict │ LOADED │ dictdb.dict │ Flat │ UInt64 │ ['value_default','value_expression'] │ ['String','String'] │ 74032 │ 0 │ 1 │ 1 │ 0.0004887585532746823 │ ClickHouse: dictdb.dicttbl │ 0 │ 1 │ 2020-03-04 04:17:34 │ 2020-03-04 04:30:34 │ 0.002 │ │
└──────────┴──────┴────────┴─────────────┴──────┴────────┴──────────────────────────────────────┴─────────────────────┴─────────────────┴─────────────┴──────────┴───────────────┴───────────────────────┴────────────────────────────┴──────────────┴──────────────┴─────────────────────┴──────────────────────────────┴───────────────────────┴────────────────┘
```

View File

@ -0,0 +1,27 @@
## ClickHouse compressor
Simple program for data compression and decompression in ClickHouse way.
### Examples
Compress data with LZ4:
```
$ ./clickhouse-compressor < input_file > output_file
```
Decompress data from LZ4 format:
```
$ ./clickhouse-compressor --decompress < input_file > output_file
```
Compress data with ZSTD at level 5:
```
$ ./clickhouse-compressor --codec 'ZSTD(5)' < input_file > output_file
```
Compress data with Delta of four bytes and ZSTD level 10:
```
$ ./clickhouse-compressor --codec 'Delta(4)' --codec 'ZSTD(10)' < input_file > output_file
```

View File

@ -0,0 +1,98 @@
---
toc_priority: 65
toc_title: clickhouse-format
---
# clickhouse-format {#clickhouse-format}
Позволяет форматировать входящие запросы.
Ключи:
- `--help` или `-h` — выводит описание ключей.
- `--hilite` — добавляет подсветку синтаксиса с экранированием символов.
- `--oneline` — форматирование в одну строку.
- `--quiet` или `-q` — проверяет синтаксис без вывода результата.
- `--multiquery` или `-n` — поддерживает несколько запросов в одной строке.
- `--obfuscate` — обфусцирует вместо форматирования.
- `--seed <строка>` — задает строку, которая определяет результат обфускации.
- `--backslash` — добавляет обратный слеш в конце каждой строки отформатированного запроса. Удобно использовать, если многострочный запрос скопирован из интернета или другого источника и его нужно выполнить из командной строки.
## Примеры {#examples}
1. Подсветка синтаксиса и форматирование в одну строку:
```bash
$ clickhouse-format --oneline --hilite <<< "SELECT sum(number) FROM numbers(5);"
```
Результат:
```sql
SELECT sum(number) FROM numbers(5)
```
2. Несколько запросов в одной строке:
```bash
$ clickhouse-format -n <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
```
Результат:
```text
SELECT *
FROM
(
SELECT 1 AS x
UNION ALL
SELECT 1
UNION DISTINCT
SELECT 3
)
;
```
3. Обфускация:
```bash
$ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```
Результат:
```text
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
```
Тот же запрос с другой инициализацией обфускатора:
```bash
$ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWEEN a AND b, CASE WHEN x >= 123 THEN y ELSE NULL END;"
```
Результат:
```text
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
```
4. Добавление обратного слеша:
```bash
$ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
```
Результат:
```text
SELECT * \
FROM \
( \
SELECT 1 AS x \
UNION ALL \
SELECT 1 \
UNION DISTINCT \
SELECT 3 \
)
```

View File

@ -6,6 +6,10 @@ toc_title: "Обзор"
# Утилиты ClickHouse {#utility-clickhouse}
- [clickhouse-local](clickhouse-local.md)
- [clickhouse-local](clickhouse-local.md) - позволяет выполнять SQL-запросы над данными без остановки сервера ClickHouse, подобно утилите `awk`.
- [clickhouse-copier](clickhouse-copier.md) - копирует (и перешардирует) данные с одного кластера на другой.
- [clickhouse-benchmark](../../operations/utilities/clickhouse-benchmark.md) — устанавливает соединение с сервером ClickHouse и запускает циклическое выполнение указанных запросов.
- [clickhouse-format](../../operations/utilities/clickhouse-format.md) — позволяет форматировать входящие запросы.
- [ClickHouse obfuscator](../../operations/utilities/clickhouse-obfuscator.md) — обфусцирует данные.
- [ClickHouse compressor](../../operations/utilities/clickhouse-compressor.md) — упаковывает и распаковывает данные.
- [clickhouse-odbc-bridge](../../operations/utilities/odbc-bridge.md) — прокси-сервер для ODBC.

View File

@ -0,0 +1,38 @@
# clickhouse-odbc-bridge
Simple HTTP server which works like a proxy for the ODBC driver. The main motivation
was possible segfaults or other faults in ODBC implementations, which can
crash the whole clickhouse-server process.
This tool works via HTTP, not via pipes, shared memory, or TCP because:
- It's simpler to implement
- It's simpler to debug
- jdbc-bridge can be implemented in the same way
## Usage
`clickhouse-server` uses this tool inside the odbc table function and StorageODBC.
However, it can be used as a standalone tool from the command line with the following
parameters in the POST-request URL:
- `connection_string` -- ODBC connection string.
- `columns` -- columns in ClickHouse NamesAndTypesList format, name in backticks,
type as string. Name and type are space separated, rows separated with
newline.
- `max_block_size` -- optional parameter, sets maximum size of single block.
The query is sent in the POST body. The response is returned in RowBinary format.
## Example:
```bash
$ clickhouse-odbc-bridge --http-port 9018 --daemon
$ curl -d "query=SELECT PageID, ImpID, AdType FROM Keys ORDER BY PageID, ImpID" --data-urlencode "connection_string=DSN=ClickHouse;DATABASE=stat" --data-urlencode "columns=columns format version: 1
3 columns:
\`PageID\` String
\`ImpID\` String
\`AdType\` String
" "http://localhost:9018/" > result.txt
$ cat result.txt # Result in RowBinary format
12246623837185725195925621517
```

View File

@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN)
**Параметры**
- `window` — ширина скользящего окна по времени. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond2 <= timestamp события cond1 + window`.
- `window` — ширина скользящего окна по времени. Это время между первым и последним условием. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond1 <= timestamp события cond2 <= ... <= timestamp события condN <= timestamp события cond1 + window`.
- `mode` — необязательный параметр. Может быть установлено несколько значений одновременно.
- `'strict'` — не учитывать подряд идущие повторяющиеся события.
- `'strict_order'` — запрещает посторонние события в искомой последовательности. Например, при поиске цепочки `A->B->C` в `A->B->D->C` поиск будет остановлен на `D` и функция вернет 2.
@ -311,7 +311,7 @@ FROM
GROUP BY user_id
)
GROUP BY level
ORDER BY level ASC
ORDER BY level ASC;
```
## retention {#retention}

View File

@ -6,6 +6,9 @@ toc_priority: 141
Суммирует арифметическую разницу между последовательными строками. Если разница отрицательна — она будет проигнорирована.
!!! info "Примечание"
Чтобы эта функция работала должным образом, исходные данные должны быть отсортированы. В [материализованном представлении](../../../sql-reference/statements/create/view.md#materialized) вместо нее рекомендуется использовать [deltaSumTimestamp](../../../sql-reference/aggregate-functions/reference/deltasumtimestamp.md#agg_functions-deltasumtimestamp).
**Синтаксис**
``` sql
@ -18,7 +21,8 @@ deltaSum(value)
**Возвращаемое значение**
- накопленная арифметическая разница, типа `Integer` или `Float`.
- Накопленная арифметическая разница.
Тип: `Integer` или `Float`.
**Примеры**
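Небольшой иллюстративный пример: для последовательности `1, 2, 3` сумма положительных разниц между соседними строками равна `2`.
``` sql
SELECT deltaSum(arrayJoin([1, 2, 3]));
```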

View File

@ -0,0 +1,45 @@
---
toc_priority: 141
---
# deltaSumTimestamp {#agg_functions-deltasumtimestamp}
Суммирует разницу между последовательными строками. Если разница отрицательна — она будет проигнорирована.
Эта функция предназначена в первую очередь для [материализованных представлений](../../../sql-reference/statements/create/view.md#materialized), упорядоченных по некоторому временному бакету согласно timestamp, например, по бакету `toStartOfMinute`. Поскольку строки в таком материализованном представлении будут иметь одинаковый timestamp, невозможно объединить их в правильном порядке. Функция отслеживает `timestamp` наблюдаемых значений, поэтому возможно правильно упорядочить состояния во время слияния.
Чтобы вычислить разницу между упорядоченными последовательными строками, вы можете использовать функцию [deltaSum](../../../sql-reference/aggregate-functions/reference/deltasum.md#agg_functions-deltasum) вместо функции `deltaSumTimestamp`.
**Синтаксис**
``` sql
deltaSumTimestamp(value, timestamp)
```
**Аргументы**
- `value` — входные значения, должны быть типа [Integer](../../data-types/int-uint.md), или [Float](../../data-types/float.md), или [Date](../../data-types/date.md), или [DateTime](../../data-types/datetime.md).
- `timestamp` — параметр для упорядочивания значений, должен быть типа [Integer](../../data-types/int-uint.md), или [Float](../../data-types/float.md), или [Date](../../data-types/date.md), или [DateTime](../../data-types/datetime.md).
**Возвращаемое значение**
- Накопленная разница между последовательными значениями, упорядоченными по параметру `timestamp`.
Тип: [Integer](../../data-types/int-uint.md), или [Float](../../data-types/float.md), или [Date](../../data-types/date.md), или [DateTime](../../data-types/datetime.md).
**Пример**
Запрос:
```sql
SELECT deltaSumTimestamp(value, timestamp)
FROM (SELECT number AS timestamp, [0, 4, 8, 3, 0, 0, 0, 1, 3, 5][number] AS value FROM numbers(1, 10));
```
Результат:
``` text
┌─deltaSumTimestamp(value, timestamp)─┐
│ 13 │
└─────────────────────────────────────┘
```

View File

@ -94,26 +94,35 @@ LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000))
Словарь полностью хранится в оперативной памяти в виде хэш-таблиц. Словарь может содержать произвольное количество элементов с произвольными идентификаторами. На практике количество ключей может достигать десятков миллионов элементов.
Если `preallocate` имеет значение `true` (по умолчанию `false`), хеш-таблица будет предварительно определена (это ускорит загрузку словаря). Используйте этот метод только в случае, если:
- Источник поддерживает произвольное количество элементов (пока поддерживается только источником `ClickHouse`).
- В данных нет дубликатов (иначе это может увеличить объем используемой памяти хеш-таблицы).
Поддерживаются все виды источников. При обновлении данные (из файла, из таблицы) читаются целиком.
Пример конфигурации:
``` xml
<layout>
<hashed />
<hashed>
<preallocate>0</preallocate>
</hashed>
</layout>
```
или
``` sql
LAYOUT(HASHED())
LAYOUT(HASHED(PREALLOCATE 0))
```
### sparse_hashed {#dicts-external_dicts_dict_layout-sparse_hashed}
Аналогичен `hashed`, но при этом занимает меньше места в памяти и генерирует более высокую загрузку CPU.
Для этого типа размещения также можно задать `preallocate` в значении `true`. В данном случае это более важно, чем для типа `hashed`.
Пример конфигурации:
``` xml
@ -125,7 +134,7 @@ LAYOUT(HASHED())
или
``` sql
LAYOUT(SPARSE_HASHED())
LAYOUT(SPARSE_HASHED([PREALLOCATE 0]))
```
### complex_key_hashed {#complex-key-hashed}
@ -443,4 +452,3 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))
Никакие другие типы не поддерживаются. Функция возвращает атрибут для префикса, соответствующего данному IP-адресу. Если есть перекрывающиеся префиксы, возвращается наиболее специфический.
Данные должны полностью помещаться в оперативной памяти.

View File

@ -86,3 +86,4 @@ SOURCE(ODBC(... invalidate_query 'SELECT update_time FROM dictionary_source wher
...
```
Для словарей `Cache`, `ComplexKeyCache`, `SSDCache` и `SSDComplexKeyCache` поддерживается как синхронное, так и асинхронное обновление.

View File

@ -159,7 +159,7 @@ CREATE DICTIONARY somename (
| Тег | Описание | Обязательный |
|------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
| `name` | Имя столбца. | Да |
| `type` | Тип данных ClickHouse.<br/>ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`. <br/>[Nullable](../../../sql-reference/data-types/nullable.md) в настоящее время поддерживается для словарей [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md). Для словарей [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache) и [IPTrie](external-dicts-dict-layout.md#ip-trie) `Nullable`-типы не поддерживаются. | Да |
| `type` | Тип данных ClickHouse.<br/>ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`. <br/>[Nullable](../../../sql-reference/data-types/nullable.md) в настоящее время поддерживается для словарей [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md), [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache). Для словарей [IPTrie](external-dicts-dict-layout.md#ip-trie) `Nullable`-типы не поддерживаются. | Да |
| `null_value` | Значение по умолчанию для несуществующего элемента.<br/>В примере это пустая строка. Значение [NULL](../../syntax.md#null-literal) можно указывать только для типов `Nullable` (см. предыдущую строку с описанием типов). | Да |
| `expression` | [Выражение](../../syntax.md#syntax-expressions), которое ClickHouse выполняет со значением.<br/>Выражением может быть имя столбца в удаленной SQL базе. Таким образом, вы можете использовать его для создания псевдонима удаленного столбца.<br/><br/>Значение по умолчанию: нет выражения. | Нет |
| <a name="hierarchical-dict-attr"></a> `hierarchical` | Если `true`, то атрибут содержит ключ предка для текущего элемента. Смотрите [Иерархические словари](external-dicts-dict-hierarchical.md).<br/><br/>Значение по умолчанию: `false`. | Нет |

View File

@ -1528,3 +1528,52 @@ SELECT arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]);
└─────────────────────────────────────────────┘
```
## arrayProduct {#arrayproduct}
Возвращает произведение элементов [массива](../../sql-reference/data-types/array.md).
**Синтаксис**
``` sql
arrayProduct(arr)
```
**Аргументы**
- `arr` — [массив](../../sql-reference/data-types/array.md) числовых значений.
**Возвращаемое значение**
- Произведение элементов массива.
Тип: [Float64](../../sql-reference/data-types/float.md).
**Примеры**
Запрос:
``` sql
SELECT arrayProduct([1,2,3,4,5,6]) as res;
```
Результат:
``` text
┌─res───┐
│ 720 │
└───────┘
```
Запрос:
``` sql
SELECT arrayProduct([toDecimal64(1,8), toDecimal64(2,8), toDecimal64(3,8)]) as res, toTypeName(res);
```
Возвращаемое значение всегда имеет тип [Float64](../../sql-reference/data-types/float.md). Результат:
``` text
┌─res─┬─toTypeName(arrayProduct(array(toDecimal64(1, 8), toDecimal64(2, 8), toDecimal64(3, 8))))─┐
│ 6 │ Float64 │
└─────┴──────────────────────────────────────────────────────────────────────────────────────────┘
```

View File

@ -5,11 +5,11 @@ toc_title: "Функции для шифрования"
# Функции шифрования {#encryption-functions}
Даннвые функции реализуют шифрование и расшифровку данных с помощью AES (Advanced Encryption Standard) алгоритма.
Данные функции реализуют шифрование и расшифровку данных с помощью AES (Advanced Encryption Standard) алгоритма.
Длина ключа зависит от режима шифрования. Он может быть длиной в 16, 24 и 32 байта для режимов шифрования `-128-`, `-192-` и `-256-` соответственно.
Длина инициализирующего вектора всегда 16 байт (лишнии байты игнорируются).
Длина инициализирующего вектора всегда 16 байт (лишние байты игнорируются).
Обратите внимание, что до версии ClickHouse 21.1 эти функции работали медленно.
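Условный пример для режима `-256-` (ключ длиной 32 байта и вектор инициализации длиной 16 байт выбраны для иллюстрации):
``` sql
SELECT hex(encrypt('aes-256-cbc', 'Секретный текст', '0123456789abcdef0123456789abcdef', 'abcdefghijklmnop'));
```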

View File

@ -7,21 +7,22 @@ toc_title: "Функции для работы с внешними словар
Информацию о подключении и настройке внешних словарей смотрите в разделе [Внешние словари](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
## dictGet {#dictget}
## dictGet, dictGetOrDefault, dictGetOrNull {#dictget}
Извлекает значение из внешнего словаря.
``` sql
dictGet('dict_name', 'attr_name', id_expr)
dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
dictGet('dict_name', attr_names, id_expr)
dictGetOrDefault('dict_name', attr_names, id_expr, default_value_expr)
dictGetOrNull('dict_name', attr_name, id_expr)
```
**Аргументы**
- `dict_name` — имя словаря. [Строковый литерал](../syntax.md#syntax-string-literal).
- `attr_name` — имя столбца словаря. [Строковый литерал](../syntax.md#syntax-string-literal).
- `id_expr` — значение ключа словаря. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md) или [Tuple](../../sql-reference/functions/ext-dict-functions.md) в зависимости от конфигурации словаря.
- `default_value_expr` — значение, возвращаемое в том случае, когда словарь не содержит строки с заданным ключом `id_expr`. [Выражение](../syntax.md#syntax-expressions) возвращающее значение с типом данных, сконфигурированным для атрибута `attr_name`.
- `attr_names` — имя столбца словаря, [Строковый литерал](../syntax.md#syntax-string-literal), или кортеж [Tuple](../../sql-reference/data-types/tuple.md) таких имен.
- `id_expr` — значение ключа словаря. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md) или [Tuple](../../sql-reference/functions/ext-dict-functions.md), в зависимости от конфигурации словаря.
- `default_value_expr` — значение, возвращаемое в том случае, когда словарь не содержит строки с заданным ключом `id_expr`. [Выражение](../syntax.md#syntax-expressions), возвращающее значение с типом данных, сконфигурированным для атрибута `attr_names`, или кортеж [Tuple](../../sql-reference/data-types/tuple.md) таких выражений.
**Возвращаемое значение**
@ -31,10 +32,11 @@ dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
- `dictGet` возвращает содержимое элемента `<null_value>`, указанного для атрибута в конфигурации словаря.
- `dictGetOrDefault` возвращает атрибут `default_value_expr`.
- `dictGetOrNull` возвращает `NULL` в случае, если ключ не найден в словаре.
Если значение атрибута не удалось обработать или оно не соответствует типу данных атрибута, то ClickHouse генерирует исключение.
**Пример**
**Пример с единственным атрибутом**
Создадим текстовый файл `ext-dict-text.csv` со следующим содержимым:
@ -93,6 +95,130 @@ LIMIT 3
└─────┴────────┘
```
**Пример с несколькими атрибутами**
Создадим текстовый файл `ext-dict-mult.csv` со следующим содержимым:
``` text
1,1,'1'
2,2,'2'
3,3,'3'
```
Первый столбец — `id`, второй столбец — `c1`, третий столбец — `c2`.
Настройка внешнего словаря:
``` xml
<yandex>
<dictionary>
<name>ext-dict-mult</name>
<source>
<file>
<path>/path-to/ext-dict-mult.csv</path>
<format>CSV</format>
</file>
</source>
<layout>
<flat />
</layout>
<structure>
<id>
<name>id</name>
</id>
<attribute>
<name>c1</name>
<type>UInt32</type>
<null_value></null_value>
</attribute>
<attribute>
<name>c2</name>
<type>String</type>
<null_value></null_value>
</attribute>
</structure>
<lifetime>0</lifetime>
</dictionary>
</yandex>
```
Выполним запрос:
``` sql
SELECT
dictGet('ext-dict-mult', ('c1','c2'), number) AS val,
toTypeName(val) AS type
FROM system.numbers
LIMIT 3;
```
``` text
┌─val─────┬─type──────────────────┐
│ (1,'1') │ Tuple(UInt8, String) │
│ (2,'2') │ Tuple(UInt8, String) │
│ (3,'3') │ Tuple(UInt8, String) │
└─────────┴───────────────────────┘
```
**Пример для словаря с диапазоном ключей**
Создадим таблицу:
```sql
CREATE TABLE range_key_dictionary_source_table
(
key UInt64,
start_date Date,
end_date Date,
value String,
value_nullable Nullable(String)
)
ENGINE = TinyLog();
INSERT INTO range_key_dictionary_source_table VALUES(1, toDate('2019-05-20'), toDate('2019-05-20'), 'First', 'First');
INSERT INTO range_key_dictionary_source_table VALUES(2, toDate('2019-05-20'), toDate('2019-05-20'), 'Second', NULL);
INSERT INTO range_key_dictionary_source_table VALUES(3, toDate('2019-05-20'), toDate('2019-05-20'), 'Third', 'Third');
```
Создадим внешний словарь:
```sql
CREATE DICTIONARY range_key_dictionary
(
key UInt64,
start_date Date,
end_date Date,
value String,
value_nullable Nullable(String)
)
PRIMARY KEY key
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'range_key_dictionary_source_table'))
LIFETIME(MIN 1 MAX 1000)
LAYOUT(RANGE_HASHED())
RANGE(MIN start_date MAX end_date);
```
Выполним запрос:
``` sql
SELECT
(number, toDate('2019-05-20')),
dictHas('range_key_dictionary', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', 'value', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', 'value_nullable', number, toDate('2019-05-20')),
dictGetOrNull('range_key_dictionary', ('value', 'value_nullable'), number, toDate('2019-05-20'))
FROM system.numbers LIMIT 5 FORMAT TabSeparated;
```
Результат:
``` text
(0,'2019-05-20') 0 \N \N (NULL,NULL)
(1,'2019-05-20') 1 First First ('First','First')
(2,'2019-05-20') 0 \N \N (NULL,NULL)
(3,'2019-05-20') 0 \N \N (NULL,NULL)
(4,'2019-05-20') 0 \N \N (NULL,NULL)
```
**Смотрите также**
- [Внешние словари](../../sql-reference/functions/ext-dict-functions.md)
@ -197,4 +323,3 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
- `dictGet[Type]OrDefault` возвращает аргумент `default_value_expr`.
Если значение атрибута не удалось обработать или оно не соответствует типу данных атрибута, то ClickHouse генерирует исключение.

View File

@ -397,9 +397,9 @@ SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0
## isIPAddressInRange {#isipaddressinrange}
Проверяет попадает ли IP адрес в интервал, заданный в [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) нотации.
Проверяет, попадает ли IP адрес в интервал, заданный в нотации [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
**Syntax**
**Синтаксис**
``` sql
isIPAddressInRange(address, prefix)
@ -409,7 +409,7 @@ isIPAddressInRange(address, prefix)
**Аргументы**
- `address` — IPv4 или IPv6 адрес. [String](../../sql-reference/data-types/string.md).
- `prefix` — IPv4 или IPv6 подсеть, заданная в CIDR нотации. [String](../../sql-reference/data-types/string.md).
- `prefix` — IPv4 или IPv6 подсеть, заданная в нотации CIDR. [String](../../sql-reference/data-types/string.md).
**Возвращаемое значение**
@ -422,7 +422,7 @@ isIPAddressInRange(address, prefix)
Запрос:
``` sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8')
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8');
```
Результат:
@ -436,7 +436,7 @@ SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8')
Запрос:
``` sql
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16')
SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16');
```
Результат:

View File

@ -369,7 +369,7 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut;
## reinterpretAsUUID {#reinterpretasuuid}
Функция принимает шестнадцатибайтную строку и интерпретирует ее байты в network order (big-endian). Если строка имеет недостаточную длину, то функция работает так, как будто строка дополнена необходимым количетсвом нулевых байт с конца. Если строка длиннее, чем шестнадцать байт, то игнорируются лишние байты с конца.
Функция принимает строку из 16 байт и интерпретирует ее байты в порядок от старшего к младшему. Если строка имеет недостаточную длину, то функция работает так, как будто строка дополнена необходимым количеством нулевых байтов с конца. Если строка длиннее, чем 16 байтов, то лишние байты с конца игнорируются.
**Синтаксис**
@ -425,9 +425,27 @@ SELECT uuid = uuid2;
## reinterpret(x, T) {#type_conversion_function-reinterpret}
Использует туже самую исходную последовательность байт в памяти для значения `x` и переинтерпретирует ее как конечный тип данных
Использует ту же самую исходную последовательность байтов в памяти для значения `x` и интерпретирует ее как конечный тип данных `T`.
**Синтаксис**
``` sql
reinterpret(x, type)
```
**Аргументы**
- `x` — любой тип данных.
- `type` — конечный тип данных. [String](../../sql-reference/data-types/string.md).
**Возвращаемое значение**
- Значение конечного типа данных.
**Примеры**
Запрос:
```sql
SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
reinterpret(toInt8(1), 'Float32') as int_to_float,
@ -448,7 +466,23 @@ SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
Поддерживается также синтаксис `CAST(x AS t)`.
Обратите внимание, что если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255.
!!! warning "Предупреждение"
Если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255.
**Синтаксис**
``` sql
CAST(x, T)
```
**Аргументы**
- `x` — любой тип данных.
- `T` — конечный тип данных. [String](../../sql-reference/data-types/string.md).
**Возвращаемое значение**
- Значение конечного типа данных.
**Примеры**
@ -456,9 +490,9 @@ SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint,
```sql
SELECT
cast(toInt8(-1), 'UInt8') AS cast_int_to_uint,
cast(toInt8(1), 'Float32') AS cast_int_to_float,
cast('1', 'UInt32') AS cast_string_to_int
CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint,
CAST(toInt8(1), 'Float32') AS cast_int_to_float,
CAST('1', 'UInt32') AS cast_string_to_int
```
Результат:
@ -488,9 +522,9 @@ SELECT
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
```
Преобразование в FixedString(N) работает только для аргументов типа String или FixedString(N).
Преобразование в FixedString(N) работает только для аргументов типа [String](../../sql-reference/data-types/string.md) или [FixedString](../../sql-reference/data-types/fixedstring.md).
Поддержано преобразование к типу [Nullable](../../sql-reference/functions/type-conversion-functions.md) и обратно.
Поддерживается преобразование к типу [Nullable](../../sql-reference/functions/type-conversion-functions.md) и обратно.
**Примеры**
@ -860,7 +894,7 @@ AS parseDateTimeBestEffortUS;
## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero}
## parseDateTime32BestEffortOrZero {#parsedatetime32besteffortorzero}
Работает также как [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевую дату или нулевую дату и время когда получает формат даты который не может быть обработан.
Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевое значение, если формат даты не может быть обработан.
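Условный пример: для строки, которую не удаётся разобрать, функция вернёт нулевую дату и время:
``` sql
SELECT parseDateTimeBestEffortOrZero('некорректная дата');
```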
## parseDateTimeBestEffortUSOrNull {#parsedatetimebesteffortusornull}
@ -1036,19 +1070,23 @@ SELECT parseDateTimeBestEffortUSOrZero('02.2021') AS parseDateTimeBestEffortUSOr
## parseDateTime64BestEffort {#parsedatetime64besteffort}
Работает также как функция [parseDateTimeBestEffort](#parsedatetimebesteffort) но также понимамет милисекунды и микросекунды и возвращает `DateTime64(3)` или `DateTime64(6)` типы данных в зависимости от заданной точности.
Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но также принимает миллисекунды и микросекунды. Возвращает тип данных [DateTime](../../sql-reference/functions/type-conversion-functions.md#data_type-datetime).
**Syntax**
**Синтаксис**
``` sql
parseDateTime64BestEffort(time_string [, precision [, time_zone]])
```
**Parameters**
**Аргументы**
- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
- `precision` — `3` for milliseconds, `6` for microseconds. Default `3`. Optional. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).
- `time_string` — строка, содержащая дату или дату со временем, которые нужно преобразовать. [String](../../sql-reference/data-types/string.md).
- `precision` — требуемая точность: `3` — для миллисекунд, `6` — для микросекунд. По умолчанию — `3`. Необязательный. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). Разбирает значение `time_string` в зависимости от часового пояса. Необязательный. [String](../../sql-reference/data-types/string.md).
**Возвращаемое значение**
- `time_string`, преобразованная в тип данных [DateTime](../../sql-reference/data-types/datetime.md).
**Примеры**
@ -1062,7 +1100,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
FORMAT PrettyCompactMonoBlcok
FORMAT PrettyCompactMonoBlock;
```
Результат:
@ -1078,12 +1116,11 @@ FORMAT PrettyCompactMonoBlcok
## parseDateTime64BestEffortOrNull {#parsedatetime32besteffortornull}
Работает также как функция [parseDateTime64BestEffort](#parsedatetime64besteffort) но возвращает `NULL` когда встречает формат даты который не может обработать.
Работает аналогично функции [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает `NULL`, если формат даты не может быть обработан.
## parseDateTime64BestEffortOrZero {#parsedatetime64besteffortorzero}
Работает также как функция [parseDateTime64BestEffort](#parsedatetimebesteffort) но возвращает "нулевую" дату и время когда встречает формат даты который не может обработать.
Работает аналогично функции [parseDateTime64BestEffort](#parsedatetimebesteffort), но возвращает нулевую дату и время, если формат даты не может быть обработан.
## toLowCardinality {#tolowcardinality}
@ -1130,11 +1167,14 @@ SELECT toLowCardinality('1');
## toUnixTimestamp64Nano {#tounixtimestamp64nano}
Преобразует значение `DateTime64` в значение `Int64` с фиксированной точностью менее одной секунды.
Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности. Обратите внимание, что возвращаемое значение - это временная метка в UTC, а не в часовом поясе `DateTime64`.
Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности.
!!! info "Примечание"
Возвращаемое значение — это временная метка в UTC, а не в часовом поясе `DateTime64`.
**Синтаксис**
``` sql
```sql
toUnixTimestamp64Milli(value)
```
@ -1150,7 +1190,7 @@ toUnixTimestamp64Milli(value)
Запрос:
``` sql
```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```
@ -1296,4 +1336,3 @@ FROM numbers(3);
│ 2,"good" │
└───────────────────────────────────────────┘
```

View File

@ -15,17 +15,29 @@ ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...] | ALL | ALL EXCEPT role [,...] ]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```
Для выполнения `ALTER USER` необходима привилегия [ALTER USER](../grant.md#grant-access-management).
## Секция GRANTEES {#grantees}
Определяет пользователей или роли, которым разрешено получать [привилегии](../../../sql-reference/statements/grant.md#grant-privileges) от указанного пользователя при условии, что этому пользователю также предоставлен весь необходимый доступ с использованием [GRANT OPTION](../../../sql-reference/statements/grant.md#grant-privigele-syntax). Параметры секции `GRANTEES`:
- `user` — пользователь, которому разрешено получать привилегии от указанного пользователя.
- `role` — роль, которой разрешено получать привилегии от указанного пользователя.
- `ANY` — любому пользователю или любой роли разрешено получать привилегии от указанного пользователя. Используется по умолчанию.
- `NONE` — никому не разрешено получать привилегии от указанного пользователя.
Вы можете исключить любого пользователя или роль, используя выражение `EXCEPT`. Например, `ALTER USER user1 GRANTEES ANY EXCEPT user2`. Это означает, что если `user1` имеет привилегии, предоставленные с использованием `GRANT OPTION`, он сможет предоставить их любому, кроме `user2`.
## Примеры {#alter-user-examples}
Установить ролями по умолчанию роли, назначенные пользователю:
``` sql
ALTER USER user DEFAULT ROLE role1, role2
ALTER USER user DEFAULT ROLE role1, role2;
```
Если роли не были назначены пользователю, ClickHouse выбрасывает исключение.
@ -33,7 +45,7 @@ ALTER USER user DEFAULT ROLE role1, role2
Установить ролями по умолчанию все роли, назначенные пользователю:
``` sql
ALTER USER user DEFAULT ROLE ALL
ALTER USER user DEFAULT ROLE ALL;
```
Если роль будет впоследствии назначена пользователю, она автоматически станет ролью по умолчанию.
@ -41,6 +53,11 @@ ALTER USER user DEFAULT ROLE ALL
Установить ролями по умолчанию все назначенные пользователю роли кроме `role1` и `role2`:
``` sql
ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2
ALTER USER user DEFAULT ROLE ALL EXCEPT role1, role2;
```
Разрешить пользователю с аккаунтом `john` предоставить свои привилегии пользователю с аккаунтом `jack`:
``` sql
ALTER USER john GRANTEES jack;
```

View File

@ -15,6 +15,7 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
[NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[DEFAULT ROLE role [,...]]
[GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...]
```
@ -24,43 +25,52 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1]
Существует несколько способов идентификации пользователя:
- `IDENTIFIED WITH no_password`
- `IDENTIFIED WITH plaintext_password BY 'qwerty'`
- `IDENTIFIED WITH sha256_password BY 'qwerty'` or `IDENTIFIED BY 'password'`
- `IDENTIFIED WITH sha256_hash BY 'hash'`
- `IDENTIFIED WITH double_sha1_password BY 'qwerty'`
- `IDENTIFIED WITH double_sha1_hash BY 'hash'`
- `IDENTIFIED WITH ldap SERVER 'server_name'`
- `IDENTIFIED WITH kerberos` or `IDENTIFIED WITH kerberos REALM 'realm'`
- `IDENTIFIED WITH no_password`
- `IDENTIFIED WITH plaintext_password BY 'qwerty'`
- `IDENTIFIED WITH sha256_password BY 'qwerty'` or `IDENTIFIED BY 'password'`
- `IDENTIFIED WITH sha256_hash BY 'hash'`
- `IDENTIFIED WITH double_sha1_password BY 'qwerty'`
- `IDENTIFIED WITH double_sha1_hash BY 'hash'`
- `IDENTIFIED WITH ldap SERVER 'server_name'`
- `IDENTIFIED WITH kerberos` or `IDENTIFIED WITH kerberos REALM 'realm'`
## Пользовательский хост
Пользовательский хост — это хост, с которого можно установить соединение с сервером ClickHouse. Хост задается в секции `HOST` следующими способами:
- `HOST IP 'ip_address_or_subnetwork'` — Пользователь может подключиться к серверу ClickHouse только с указанного IP-адреса или [подсети](https://ru.wikipedia.org/wiki/Подсеть). Примеры: `HOST IP '192.168.0.0/16'`, `HOST IP '2001:DB8::/32'`. При использовании в эксплуатации указывайте только элементы `HOST IP` (IP-адреса и маски подсети), так как использование `host` и `host_regexp` может привести к дополнительной задержке.
- `HOST ANY` — Пользователь может подключиться с любого хоста. Используется по умолчанию.
- `HOST LOCAL` — Пользователь может подключиться только локально.
- `HOST NAME 'fqdn'` — Хост задается через FQDN. Например, `HOST NAME 'mysite.com'`.
- `HOST NAME REGEXP 'regexp'` — Позволяет использовать регулярные выражения [pcre](http://www.pcre.org/), чтобы задать хосты. Например, `HOST NAME REGEXP '.*\.mysite\.com'`.
- `HOST LIKE 'template'` — Позволяет использовать оператор [LIKE](../../functions/string-search-functions.md#function-like) для фильтрации хостов. Например, `HOST LIKE '%'` эквивалентен `HOST ANY`; `HOST LIKE '%.mysite.com'` разрешает подключение со всех хостов в домене `mysite.com`.
- `HOST IP 'ip_address_or_subnetwork'` — Пользователь может подключиться к серверу ClickHouse только с указанного IP-адреса или [подсети](https://ru.wikipedia.org/wiki/Подсеть). Примеры: `HOST IP '192.168.0.0/16'`, `HOST IP '2001:DB8::/32'`. При использовании в эксплуатации указывайте только элементы `HOST IP` (IP-адреса и маски подсети), так как использование `host` и `host_regexp` может привести к дополнительной задержке.
- `HOST ANY` — Пользователь может подключиться с любого хоста. Используется по умолчанию.
- `HOST LOCAL` — Пользователь может подключиться только локально.
- `HOST NAME 'fqdn'` — Хост задается через FQDN. Например, `HOST NAME 'mysite.com'`.
- `HOST NAME REGEXP 'regexp'` — Позволяет использовать регулярные выражения [pcre](http://www.pcre.org/), чтобы задать хосты. Например, `HOST NAME REGEXP '.*\.mysite\.com'`.
- `HOST LIKE 'template'` — Позволяет использовать оператор [LIKE](../../functions/string-search-functions.md#function-like) для фильтрации хостов. Например, `HOST LIKE '%'` эквивалентен `HOST ANY`; `HOST LIKE '%.mysite.com'` разрешает подключение со всех хостов в домене `mysite.com`.
Также, чтобы задать хост, вы можете использовать `@` вместе с именем пользователя. Примеры:
- `CREATE USER mira@'127.0.0.1'` — Эквивалентно `HOST IP`.
- `CREATE USER mira@'localhost'` — Эквивалентно `HOST LOCAL`.
- `CREATE USER mira@'192.168.%.%'` — Эквивалентно `HOST LIKE`.
- `CREATE USER mira@'127.0.0.1'` — Эквивалентно `HOST IP`.
- `CREATE USER mira@'localhost'` — Эквивалентно `HOST LOCAL`.
- `CREATE USER mira@'192.168.%.%'` — Эквивалентно `HOST LIKE`.
!!! info "Внимание"
ClickHouse трактует конструкцию `user_name@'address'` как имя пользователя целиком. То есть технически вы можете создать несколько пользователей с одинаковыми `user_name`, но разными частями конструкции после `@`, но лучше так не делать.
## Секция GRANTEES {#grantees}
Указываются пользователи или роли, которым разрешено получать [привилегии](../../../sql-reference/statements/grant.md#grant-privileges) от создаваемого пользователя при условии, что этому пользователю также предоставлен весь необходимый доступ с использованием [GRANT OPTION](../../../sql-reference/statements/grant.md#grant-privigele-syntax). Параметры секции `GRANTEES`:
- `user` — указывается пользователь, которому разрешено получать привилегии от создаваемого пользователя.
- `role` — указывается роль, которой разрешено получать привилегии от создаваемого пользователя.
- `ANY` — любому пользователю или любой роли разрешено получать привилегии от создаваемого пользователя. Используется по умолчанию.
- `NONE` — никому не разрешено получать привилегии от создаваемого пользователя.
Вы можете исключить любого пользователя или роль, используя выражение `EXCEPT`. Например, `CREATE USER user1 GRANTEES ANY EXCEPT user2`. Это означает, что если `user1` имеет привилегии, предоставленные с использованием `GRANT OPTION`, он сможет предоставить их любому, кроме `user2`.
## Примеры {#create-user-examples}
Создать аккаунт `mira`, защищенный паролем `qwerty`:
```sql
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty'
CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty';
```
Пользователь `mira` должен запустить клиентское приложение на хосте, где запущен ClickHouse.
@ -68,13 +78,13 @@ CREATE USER mira HOST IP '127.0.0.1' IDENTIFIED WITH sha256_password BY 'qwerty'
Создать аккаунт `john`, назначить на него роли, сделать данные роли ролями по умолчанию:
``` sql
CREATE USER john DEFAULT ROLE role1, role2
CREATE USER john DEFAULT ROLE role1, role2;
```
Создать аккаунт `john` и установить ролями по умолчанию все его будущие роли:
``` sql
CREATE USER user DEFAULT ROLE ALL
CREATE USER john DEFAULT ROLE ALL;
```
Когда роль будет назначена аккаунту `john`, она автоматически станет ролью по умолчанию.
@ -82,7 +92,11 @@ CREATE USER user DEFAULT ROLE ALL
Создать аккаунт `john` и установить ролями по умолчанию все его будущие роли, кроме `role1` и `role2`:
``` sql
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2
CREATE USER john DEFAULT ROLE ALL EXCEPT role1, role2;
```
<!--hide-->
Создать пользователя с аккаунтом `john` и разрешить ему предоставить свои привилегии пользователю с аккаунтом `jack`:
``` sql
CREATE USER john GRANTEES jack;
```

View File

@ -11,6 +11,6 @@ TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
Удаляет все данные из таблицы. Если условие `IF EXISTS` не указано, запрос вернет ошибку, если таблицы не существует.
Запрос `TRUNCATE` не поддерживается для следующих движков: [View](../../engines/table-engines/special/view.md), [File](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md) и [Null](../../engines/table-engines/special/null.md).
Запрос `TRUNCATE` не поддерживается для следующих движков: [View](../../engines/table-engines/special/view.md), [File](../../engines/table-engines/special/file.md), [URL](../../engines/table-engines/special/url.md), [Buffer](../../engines/table-engines/special/buffer.md) и [Null](../../engines/table-engines/special/null.md).
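Условный пример (имена базы данных, таблицы и кластера выбраны для иллюстрации):
``` sql
TRUNCATE TABLE IF EXISTS db.my_table ON CLUSTER my_cluster;
```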

View File

@ -51,5 +51,5 @@ The easiest way to see the result is to use `--livereload=8888` argument of buil
At the moment there's no easy way to do just that, but you can consider:
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA).
- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w).
- Some search engines allow subscribing to specific website changes via email, and you can opt in to that for https://clickhouse.tech.

View File

@ -62,7 +62,6 @@ def build_amp(lang, args, cfg):
for root, _, filenames in os.walk(site_temp):
if 'index.html' in filenames:
paths.append(prepare_amp_html(lang, args, root, site_temp, main_site_dir))
test.test_amp(paths, lang)
logging.info(f'Finished building AMP version for {lang}')

View File

@ -40,7 +40,7 @@ def build_for_lang(lang, args):
site_names = {
'en': 'ClickHouse Blog',
'ru': 'Блог ClickHouse '
'ru': 'Блог ClickHouse'
}
assert len(site_names) == len(languages)
@ -62,7 +62,7 @@ def build_for_lang(lang, args):
strict=True,
theme=theme_cfg,
nav=blog_nav,
copyright='©2016–2020 Yandex LLC',
copyright='©2016–2021 Yandex LLC',
use_directory_urls=True,
repo_name='ClickHouse/ClickHouse',
repo_url='https://github.com/ClickHouse/ClickHouse/',

View File

@ -94,7 +94,7 @@ def build_for_lang(lang, args):
site_dir=site_dir,
strict=True,
theme=theme_cfg,
copyright='©2016–2020 Yandex LLC',
copyright='©2016–2021 Yandex LLC',
use_directory_urls=True,
repo_name='ClickHouse/ClickHouse',
repo_url='https://github.com/ClickHouse/ClickHouse/',

View File

@ -31,7 +31,16 @@ def build_nav_entry(root, args):
result_items.append((prio, title, payload))
elif filename.endswith('.md'):
path = os.path.join(root, filename)
meta, content = util.read_md_file(path)
meta = ''
content = ''
try:
meta, content = util.read_md_file(path)
except:
print('Error in file: {}'.format(path))
raise
path = path.split('/', 2)[-1]
title = meta.get('toc_title', find_first_header(content))
if title:

View File

@ -3,34 +3,9 @@
import logging
import os
import sys
import bs4
import logging
import os
import subprocess
import bs4
def test_amp(paths, lang):
try:
# Get latest amp validator version
subprocess.check_call('amphtml-validator --help',
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
shell=True)
except subprocess.CalledProcessError:
subprocess.check_call('npm i -g amphtml-validator', stderr=subprocess.DEVNULL, shell=True)
paths = ' '.join(paths)
command = f'amphtml-validator {paths}'
try:
subprocess.check_output(command, shell=True).decode('utf-8')
except subprocess.CalledProcessError:
logging.error(f'Invalid AMP for {lang}')
raise
def test_template(template_path):
if template_path.endswith('amp.html'):

View File

@ -155,10 +155,6 @@ def build_website(args):
os.path.join(args.src_dir, 'utils', 'list-versions', 'version_date.tsv'),
os.path.join(args.output_dir, 'data', 'version_date.tsv'))
shutil.copy2(
os.path.join(args.website_dir, 'js', 'embedd.min.js'),
os.path.join(args.output_dir, 'js', 'embedd.min.js'))
for root, _, filenames in os.walk(args.output_dir):
for filename in filenames:
if filename == 'main.html':

View File

@ -7,11 +7,11 @@ toc_title: ODBC
# ODBC {#table-engine-odbc}
允许ClickHouse通过以下方式连接到外部数据库 [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).
允许ClickHouse通过[ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity)方式连接到外部数据库.
为了安全地实现ODBC连接ClickHouse使用单独的程序 `clickhouse-odbc-bridge`. 如果直接从ODBC驱动程序加载 `clickhouse-server`驱动程序问题可能会导致ClickHouse服务器崩溃。 ClickHouse自动启动 `clickhouse-odbc-bridge` 当它是必需的。 ODBC桥程序是从相同的软件包作为安装 `clickhouse-server`.
为了安全地实现ODBC连接ClickHouse使用了一个独立程序 `clickhouse-odbc-bridge`. 如果ODBC驱动程序是直接从 `clickhouse-server`中加载的那么驱动问题可能会导致ClickHouse服务崩溃。 当有需要时ClickHouse会自动启动 `clickhouse-odbc-bridge`。 ODBC桥梁程序与`clickhouse-server`来自相同的安装包.
该引擎支持 [可为空](../../../sql-reference/data-types/nullable.md) 数据类型。
该引擎支持 [可为空](../../../sql-reference/data-types/nullable.md) 数据类型。
## 创建表 {#creating-a-table}
@ -25,14 +25,14 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
ENGINE = ODBC(connection_settings, external_database, external_table)
```
请参阅的详细说明 [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) 查询。
详情请见 [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) 查询。
表结构可以与源表结构不同:
- 列名应与源表中的列名相同,但您可以按任何顺序使用其中的一些列。
- 列类型可能与源表中的列类型不同。 ClickHouse尝试 [](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) ClickHouse数据类型的值
- 列类型可能与源表中的列类型不同。 ClickHouse尝试将数值[映射](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) ClickHouse数据类型。
**发动机参数**
**引擎参数**
- `connection_settings` — Name of the section with connection settings in the `odbc.ini` 文件
- `external_database` — Name of a database in an external DBMS.
@ -40,13 +40,13 @@ ENGINE = ODBC(connection_settings, external_database, external_table)
## 用法示例 {#usage-example}
**通过ODBC从本地MySQL安装中检索数据**
**通过ODBC从本地安装的MySQL中检索数据**
此示例检查Ubuntu Linux18.04和MySQL服务器5.7
本示例针对Ubuntu Linux18.04和MySQL服务器5.7进行检查
确保安装了unixODBC和MySQL连接器。
确保安装了unixODBC和MySQL连接器。
默认情况下如果从软件包安装ClickHouse以用户身份启动 `clickhouse`. 因此您需要在MySQL服务器中创建和配置此用户。
默认情况下如果从软件包安装ClickHouse以用户`clickhouse`的身份启动 . 因此您需要在MySQL服务器中创建和配置此用户。
``` bash
$ sudo mysql
@ -57,7 +57,7 @@ mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION;
```
然后配置连接 `/etc/odbc.ini`.
然后在`/etc/odbc.ini`中配置连接 .
``` bash
$ cat /etc/odbc.ini
@ -70,7 +70,7 @@ USERNAME = clickhouse
PASSWORD = clickhouse
```
您可以使用 `isql` unixodbc安装中的实用程序。
您可以从安装的unixodbc中使用 `isql` 实用程序来检查连接情况
``` bash
$ isql -v mysqlconn

View File

@ -7,37 +7,37 @@ toc_title: "\u6570\u636E\u5907\u4EFD"
# 数据备份 {#data-backup}
尽管[副本](../engines/table-engines/mergetree-family/replication.md) 可以预防硬件错误带来的数据丢失, 但是它不能防止人为操作的错误: 意外删除数据, 删除错误的 table 或者删除错误 cluster 上的 table, 可以导致错误数据处理错误或者数据损坏的 bugs. 这类意外可能会影响所有的副本. ClickHouse 有内建的保障措施可以预防一些错误 — 例如, 默认情况下[您不能使用类似MergeTree的引擎删除包含超过50Gb数据的表](server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保障措施不能涵盖所有可能的情况,并且可以规避
尽管 [副本] (../engines/table-engines/mergetree-family/replication.md) 可以提供针对硬件的错误防护, 但是它不能预防人为操作失误: 数据的意外删除, 错误表的删除或者错误集群上表的删除, 以及导致错误数据处理或者数据损坏的软件bug. 在很多案例中,这类意外可能会影响所有的副本. ClickHouse 有内置的保护措施可以预防一些错误 — 例如, 默认情况下 [不能人工删除使用带有MergeTree引擎且包含超过50Gb数据的表] (server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保护措施不能覆盖所有可能情况,并且这些措施可以被绕过
为了有效地减少可能的人为错误,您应该 **提前**准备备份和还原数据的策略.
为了有效地减少可能的人为错误,您应该 **提前** 仔细的准备备份和数据还原的策略.
不同公司有不同的可用资源和业务需求,因此没有适合各种情况的ClickHouse备份和恢复通用解决方案。 适用于 1GB 的数据的方案可能并不适用于几十 PB 数据的情况。 有多种可能的并有自己优缺点的方法,这将在下面讨论。 好的主意是同时结合使用多种方法而不是仅使用一种,这样可以弥补不同方法各自的缺点
不同公司有不同的可用资源和业务需求,因此不存在一个通用的解决方案可以应对各种情况下的ClickHouse备份和恢复。 适用于 1GB 数据的方案可能并不适用于几十 PB 数据的情况。 有多种具备各自优缺点的可能方法,将在下面对其进行讨论。最好使用几种方法而不是仅仅使用一种方法来弥补它们的各种缺点。
!!! note "注"
请记住,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时(或者至少需要比业务能够容忍的时间更长),恢复可能无法正常工作。 因此无论您选择哪种备份方法请确保自动还原过程并定期在备用ClickHouse群集上练
需要注意的是,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时可能无法正常恢复(或者至少需要的时间比业务能够容忍的时间更长)。 因此无论您选择哪种备份方法请确保自动还原过程并定期在备用ClickHouse群集上练。
## 将源数据复制到其地方 {#duplicating-source-data-somewhere-else}
## 将源数据复制到其地方 {#duplicating-source-data-somewhere-else}
通常被聚集到ClickHouse的数据是通过某种持久队列传递的例如 [Apache Kafka](https://kafka.apache.org). 在这种情况下可以配置一组额外的订阅服务器这些订阅服务器将在写入ClickHouse时读取相同的数据流并将其存储在冷存储中。 大多数公司已经有一些默认推荐冷存储,可能是对象存储或分布式文件系统,如 [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html).
通常摄入到ClickHouse的数据是通过某种持久队列传递的例如 [Apache Kafka] (https://kafka.apache.org). 在这种情况下可以配置一组额外的订阅服务器这些订阅服务器将在写入ClickHouse时读取相同的数据流并将其存储在冷存储中。 大多数公司已经有一些默认推荐冷存储,可能是对象存储或分布式文件系统,如 [HDFS] (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html).
## Filesystem Snapshots {#filesystem-snapshots}

Some local filesystems provide snapshot functionality (for example, [ZFS](https://en.wikipedia.org/wiki/ZFS)), but they might not be the best choice for serving live queries. A possible solution is to create additional replicas with this kind of filesystem and exclude them from the [Distributed](../engines/table-engines/special/distributed.md) tables that are used for `SELECT` queries. Snapshots on such replicas will be out of reach of any queries that modify data. As a bonus, these replicas might have special hardware configurations with more disks attached per server, which would be cost-effective.

## clickhouse-copier {#clickhouse-copier}

[clickhouse-copier](utilities/clickhouse-copier.md) is a versatile tool that was initially created to re-shard petabyte-sized tables. It can also be used for backup and restore purposes because it reliably copies data between ClickHouse tables and clusters.

For smaller volumes of data, a simple `INSERT INTO ... SELECT ...` to remote tables might also work, as in the sketch below.
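For instance, a one-off copy into a table on a backup server might look roughly like the following (host, credentials, and table names are placeholders), run from the backup side so that it pulls the data through the `remote` table function:

``` sql
-- Run on the backup server: pull all rows from the source server
-- into a pre-created table with the same structure.
INSERT INTO db.table_backup
SELECT *
FROM remote('source-host:9000', db.table, 'user', 'password');
```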
## Manipulations with Parts {#manipulations-with-parts}

ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hardlinks into the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: this gives you a simple backup that does not require any additional external system, but it is still prone to hardware issues. For this reason, it is better to copy them remotely to another location and then remove the local copies. Distributed filesystems and object stores are still good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network filesystem or perhaps [rsync](https://en.wikipedia.org/wiki/Rsync)).

Data can be restored from a backup using `ALTER TABLE ... ATTACH PARTITION ...`.
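A sketch of the round trip with a hypothetical table partitioned by month; the frozen files still have to be copied out of `shadow/` to backup storage, and placed back into the table's `detached/` directory before re-attaching:

``` sql
-- Create a local, hardlinked snapshot of one partition under
-- /var/lib/clickhouse/shadow/ (table and partition are illustrative,
-- assuming PARTITION BY toYYYYMM(date)).
ALTER TABLE db.visits FREEZE PARTITION 202105;

-- ...copy the frozen files to backup storage, and when restoring,
-- place them into /var/lib/clickhouse/data/db/visits/detached/ ...

-- Attach the partition back from the detached directory.
ALTER TABLE db.visits ATTACH PARTITION 202105;
```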
For more information about queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter.md#alter_manipulations-with-partitions).
A third-party tool is available to automate this approach: [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup).

[Original article](https://clickhouse.tech/docs/en/operations/backup/) <!--hide-->
@ -5,13 +5,13 @@ machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3
# system.data_type_families {#system_tables-data_type_families}

Contains information about supported [data types](../../sql-reference/data-types/).

Columns:
- `name` ([String](../../sql-reference/data-types/string.md)) — Data type name.
- `case_insensitive` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Property that shows whether a data type name can be used in queries in a case-insensitive manner. For example, `Date` and `date` are both valid.
- `alias_to` ([String](../../sql-reference/data-types/string.md)) — Data type name for which `name` is an alias.
**Example**
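For instance, the case-insensitive type names can be listed with a query along these lines:

``` sql
-- Which data type names may be written in any letter case?
SELECT name, alias_to
FROM system.data_type_families
WHERE case_insensitive = 1;
```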
@ -36,4 +36,4 @@ SELECT * FROM system.data_type_families WHERE alias_to = 'String'
**See Also**

- [Syntax](../../sql-reference/syntax.md) — Information about supported syntax.
@ -7,33 +7,33 @@ toc_title: "\u7CFB\u7EDF\u8868"
# System Tables {#system-tables}

## Introduction {#system-tables-introduction}

System tables provide information about:

- Server states, processes, and environment.
- The server's internal processes.
System tables:

- Are located in the `system` database.
- Are available only for reading data.
- Cannot be dropped or altered, but can be detached.

Most system tables store their data in RAM. A ClickHouse server creates such system tables at startup.
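For example, listing the log tables mentioned below is an ordinary query (a minimal sketch):

``` sql
-- System tables live in the `system` database.
SHOW TABLES FROM system LIKE '%log%';
```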
Unlike other system tables, the system log tables [metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log), [query_log](../../operations/system-tables/query_log.md#system_tables-query_log), [query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log), [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log), [part_log](../../operations/system-tables/part_log.md#system.part_log), crash_log and text_log are served by the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table engine by default and store their data in the filesystem. If you remove a table from the filesystem, the ClickHouse server creates an empty one again at the time of the next data write. If the schema of a system table changes in a new release, ClickHouse renames the current table and creates a new one.

System log tables can be customized by creating a config file with the same name as the table under `/etc/clickhouse-server/config.d/`, or by setting the corresponding elements in `/etc/clickhouse-server/config.xml`. The elements that can be customized are:
- `database`: database the system log table belongs to. This option is deprecated now; all system log tables are in the `system` database.
- `table`: table to receive the data writes.
- `partition_by`: specifies a [PARTITION BY](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) expression.
- `ttl`: specifies a table TTL expression.
- `flush_interval_milliseconds`: interval for flushing the table's data to disk.
- `engine`: provides a full table engine definition (starting with `ENGINE =`). This option conflicts with `partition_by` and `ttl`. If they are set together, the server raises an exception at startup and exits.
An example of such configuration:
```
<yandex>
@ -50,20 +50,20 @@ toc_title: "\u7CFB\u7EDF\u8868"
</yandex>
```
By default, table growth is unlimited. To control the size of a table, you can use TTL settings for removing outdated log records. You can also use the partitioning feature of `MergeTree`-engine tables.
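A quick way to see whether such limits are needed is to check how much disk space the log tables currently occupy, for example with a query against `system.parts` (a sketch):

``` sql
-- On-disk size of the system log tables, largest first.
SELECT table, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE database = 'system' AND active
GROUP BY table
ORDER BY sum(bytes_on_disk) DESC;
```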
## Sources of System Metrics {#system-tables-sources-of-system-metrics}

For collecting system metrics, the ClickHouse server uses:

- `CAP_NET_ADMIN` capability.
- [procfs](https://en.wikipedia.org/wiki/Procfs) (only on Linux).
**procfs**
If the ClickHouse server does not have the `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, the ClickHouse server collects the following metrics:
- `OSCPUVirtualTimeMicroseconds`
- `OSCPUWaitMicroseconds`
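Server-wide totals for these counters can be checked in the `system.events` table (per-query values are also recorded in the query log); a minimal sketch:

``` sql
-- Cumulative procfs-based CPU counters collected by the server.
SELECT event, value
FROM system.events
WHERE event LIKE 'OSCPU%';
```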

@ -5,9 +5,9 @@ toc_priority: 61
toc_title: "\u95F4\u9694"
---
# Interval {#data-type-interval}

The family of data types representing time and date intervals. These are the resulting types of the [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator.

!!! warning "Warning"
    `Interval` data type values can't be stored in tables.
@ -15,7 +15,7 @@ toc_title: "\u95F4\u9694"
Structure:

- Time interval as an unsigned integer value.
- Type of the interval.

Supported interval types:
@ -28,7 +28,7 @@ toc_title: "\u95F4\u9694"
- `QUARTER`
- `YEAR`
For each interval type, there is a separate data type. For example, the `DAY` interval corresponds to the `IntervalDay` data type:
``` sql
SELECT toTypeName(INTERVAL 4 DAY)
```
@ -42,7 +42,7 @@ SELECT toTypeName(INTERVAL 4 DAY)
## Usage Remarks {#data-type-interval-usage-remarks}

You can use `Interval`-type values in arithmetic operations with [Date](../../../sql-reference/data-types/date.md) and [DateTime](../../../sql-reference/data-types/datetime.md) values. For example, you can add 4 days to the current time:
``` sql
SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
@ -54,10 +54,10 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
└─────────────────────┴───────────────────────────────┘
```
Intervals of different types can't be combined. You can't use intervals like `4 DAY 1 HOUR`. Specify the interval in units that are smaller than or equal to the smallest unit of the interval; for example, the interval `1 day and an hour` can be expressed as `25 HOUR` or `90000 SECOND`.

You can't perform arithmetic operations on `Interval`-type values, but you can add intervals of different types to values of the `Date` or `DateTime` data types. For example:
``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR
```
@ -81,5 +81,5 @@ Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argu
## See Also {#see-also}

- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) operator
- [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions
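The `toInterval*` functions referenced above build the same values programmatically; a small sketch:

``` sql
-- toIntervalDay(4) and toIntervalHour(3) are equivalent to
-- INTERVAL 4 DAY and INTERVAL 3 HOUR.
SELECT now() AS current_date_time, current_date_time + toIntervalDay(4) + toIntervalHour(3);
```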

@ -238,7 +238,7 @@ SELECT a, b, c FROM (SELECT ...)
If a `SELECT` clause in a materialized view contains `DISTINCT`, `GROUP BY`, `ORDER BY`, or `LIMIT`, note that these are applied only to each individual block of inserted data. For example, if the query contains `GROUP BY`, data is aggregated only during insertion and only within a single block of inserted data; the data is not aggregated any further. The exception is when you use an engine that performs its own data aggregation, such as `SummingMergeTree`; see the sketch below.
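As an illustration of that exception, a sketch with hypothetical tables where the target engine finishes the aggregation during background merges:

``` sql
CREATE TABLE hits (site_id UInt32, views UInt32) ENGINE = MergeTree() ORDER BY site_id;

-- The view's GROUP BY only sees one inserted block at a time, so the
-- SummingMergeTree engine is what eventually combines the partial sums.
CREATE MATERIALIZED VIEW hits_per_site
ENGINE = SummingMergeTree() ORDER BY site_id
AS SELECT site_id, sum(views) AS views
FROM hits
GROUP BY site_id;
```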
Executing `ALTER` queries on materialized views is currently not supported, which can be inconvenient. If the materialized view was created with `TO [db.]name`, you can `DETACH` the view, run `ALTER` on the target table, and then `ATTACH` the previously detached view.
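A sketch of that workflow with hypothetical names, for a view created with `TO db.target`:

``` sql
-- Stop inserts from flowing through the view.
DETACH TABLE db.mv;

-- Change the target table that actually stores the data.
ALTER TABLE db.target ADD COLUMN c UInt64 DEFAULT 0;

-- Bring the view back (its SELECT may also need to be recreated to match).
ATTACH TABLE db.mv;
```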
Views look the same as normal tables. For example, they are listed in the result of a `SHOW TABLES` query.
@ -14,7 +14,7 @@ INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
The part containing `INSERT INTO t VALUES` is parsed by the full SQL parser, while the data part `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` is handled by the fast stream parser. You can also enable the full SQL parser for the data part by using the [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions) setting. When `input_format_values_interpret_expressions = 1`, ClickHouse first tries to parse values with the fast stream parser. If it fails, ClickHouse tries to use the full SQL parser for the data, treating it like an SQL [expression](#syntax-expressions).
Data can be in any format. When a query is received, the server calculates no more than [max_query_size](../operations/settings/settings.md#settings-max_query_size) bytes of the request in RAM (1 MB by default), and the rest is stream parsed. This avoids issues when processing large `INSERT` queries.
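For illustration, with the setting enabled the streaming parser handles plain literals, and only values it cannot parse, such as expressions, are handed to the full SQL parser (table and values are illustrative):

``` sql
CREATE TABLE t2 (id UInt8, s String, d Date) ENGINE = Memory;

SET input_format_values_interpret_expressions = 1;

-- The literals are handled by the fast stream parser; toDate(now())
-- is not a literal, so it falls back to the full SQL parser.
INSERT INTO t2 VALUES (1, 'Hello, world', toDate(now()));
```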

Some files were not shown because too many files have changed in this diff.