diff --git a/.github/ISSUE_TEMPLATE/40_bug-report.md b/.github/ISSUE_TEMPLATE/40_bug-report.md index 97137366189..5c8611d47e6 100644 --- a/.github/ISSUE_TEMPLATE/40_bug-report.md +++ b/.github/ISSUE_TEMPLATE/40_bug-report.md @@ -10,12 +10,26 @@ assignees: '' You have to provide the following information whenever possible. **Describe the bug** + A clear and concise description of what works not as it is supposed to. **Does it reproduce on recent release?** + [The list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv) +**Enable crash reporting** + +If possible, change "enabled" to true in the "send_crash_reports" section of `config.xml`: + +``` + <send_crash_reports> +     <enabled>false</enabled> + </send_crash_reports> +``` + **How to reproduce** + * Which ClickHouse server version to use * Which interface to use, if matters * Non-default settings, if any @@ -24,10 +38,13 @@ A clear and concise description of what works not as it is supposed to. * Queries to run that lead to unexpected result **Expected behavior** + A clear and concise description of what you expected to happen. **Error message and/or stacktrace** + If applicable, add screenshots to help explain your problem. **Additional context** + Add any other context about the problem here. diff --git a/.gitmodules b/.gitmodules index 66a2370f0da..2ccce88e5e4 100644 --- a/.gitmodules +++ b/.gitmodules @@ -228,3 +228,7 @@ [submodule "contrib/datasketches-cpp"] path = contrib/datasketches-cpp url = https://github.com/ClickHouse-Extras/datasketches-cpp.git + +[submodule "contrib/yaml-cpp"] + path = contrib/yaml-cpp + url = https://github.com/ClickHouse-Extras/yaml-cpp.git diff --git a/CHANGELOG.md b/CHANGELOG.md index cc1ec835a7b..2eaecaa4c9b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,142 @@ +## ClickHouse release 21.5, 2021-05-20 + +#### Backward Incompatible Change + +* Change comparison of integers and floating point numbers when the integer is not exactly representable in the floating point data type. In the new version the comparison will return false, as a rounding error occurs. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not exactly representable as a floating point number (and `9223372036854775808.0` is rounded to `9223372036854776000.0`). But in the previous version the comparison returned true, as the numbers were considered equal, because if the floating point number `9223372036854776000.0` gets converted back to UInt64, it yields `9223372036854775808`. For reference, the Python programming language also treats these numbers as equal. But this behaviour was dependent on the CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so we made the comparison more precise. It will treat int and float numbers as equal only if the int is represented in the floating point type exactly. [#22595](https://github.com/ClickHouse/ClickHouse/pull/22595) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Remove support for `argMin` and `argMax` for a single `Tuple` argument. The code was not memory-safe. The feature was added by mistake and it is confusing for people. These functions can be reintroduced under different names later. This fixes [#22384](https://github.com/ClickHouse/ClickHouse/issues/22384) and reverts [#17359](https://github.com/ClickHouse/ClickHouse/issues/17359). [#23393](https://github.com/ClickHouse/ClickHouse/pull/23393) ([alexey-milovidov](https://github.com/alexey-milovidov)). 
+ + #### New Feature + +* Added functions `dictGetChildren(dictionary, key)`, `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. Zero `level` value is equivalent to infinity. Improved performance of `dictGetHierarchy`, `dictIsIn` functions. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)). +* Added function `dictGetOrNull`. It works like `dictGet`, but returns `Null` if the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)). +* Added a table function `s3Cluster`, which allows processing files from `s3` in parallel on every node of a specified cluster. [#22012](https://github.com/ClickHouse/ClickHouse/pull/22012) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Added support for replicas and shards in the MySQL/PostgreSQL table engine / table function. You can write `SELECT * FROM mysql('host{1,2}-{1|2}', ...)`. Closes [#20969](https://github.com/ClickHouse/ClickHouse/issues/20969). [#22217](https://github.com/ClickHouse/ClickHouse/pull/22217) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Added `ALTER TABLE ... FETCH PART ...` query. It's similar to `FETCH PARTITION`, but fetches only one part. [#22706](https://github.com/ClickHouse/ClickHouse/pull/22706) ([turbo jason](https://github.com/songenjie)). +* Added a setting `max_distributed_depth` that limits the depth of recursive queries to `Distributed` tables. Closes [#20229](https://github.com/ClickHouse/ClickHouse/issues/20229). [#21942](https://github.com/ClickHouse/ClickHouse/pull/21942) ([flynn](https://github.com/ucasFL)). + +#### Performance Improvement + +* Improved performance of `intDiv` by dynamic dispatch for AVX2. This closes [#22314](https://github.com/ClickHouse/ClickHouse/issues/22314). [#23000](https://github.com/ClickHouse/ClickHouse/pull/23000) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Improved performance of reading from `ArrowStream` input format for sources other than a local file (e.g. URL). [#22673](https://github.com/ClickHouse/ClickHouse/pull/22673) ([nvartolomei](https://github.com/nvartolomei)). +* Disabled compression by default when interacting with localhost (with clickhouse-client or server to server with distributed queries) via native protocol. It may improve performance of some import/export operations. This closes [#22234](https://github.com/ClickHouse/ClickHouse/issues/22234). [#22237](https://github.com/ClickHouse/ClickHouse/pull/22237) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Exclude values that do not belong to the shard from the right part of the IN section for distributed queries (under `optimize_skip_unused_shards_rewrite_in`, enabled by default, since it still requires `optimize_skip_unused_shards`). [#21511](https://github.com/ClickHouse/ClickHouse/pull/21511) ([Azat Khuzhin](https://github.com/azat)). +* Improved performance of reading a subset of columns with File-like table engines and column-oriented formats like Parquet, Arrow or ORC. This closes [#20129](https://github.com/ClickHouse/ClickHouse/issues/20129). 
[#21302](https://github.com/ClickHouse/ClickHouse/pull/21302) ([keenwolf](https://github.com/keen-wolf)). +* Allow moving more conditions to `PREWHERE` as it was before version 21.1 (adjustment of internal heuristics). An insufficient number of moved conditions could lead to worse performance. [#23397](https://github.com/ClickHouse/ClickHouse/pull/23397) ([Anton Popov](https://github.com/CurtizJ)). +* Improved performance of ODBC connections and fixed all the outstanding issues from the backlog. Using the `nanodbc` library instead of `Poco::ODBC`. Closes [#9678](https://github.com/ClickHouse/ClickHouse/issues/9678). Add support for DateTime64 and Decimal* for the ODBC table engine. Closes [#21961](https://github.com/ClickHouse/ClickHouse/issues/21961). Fixed an issue with Cyrillic text being truncated. Closes [#16246](https://github.com/ClickHouse/ClickHouse/issues/16246). Added connection pools for the ODBC bridge. [#21972](https://github.com/ClickHouse/ClickHouse/pull/21972) ([Kseniia Sumarokova](https://github.com/kssenii)). + +#### Improvement + +* Increase `max_uri_size` (the maximum size of a URL in the HTTP interface) to 1 MiB by default. This closes [#21197](https://github.com/ClickHouse/ClickHouse/issues/21197). [#22997](https://github.com/ClickHouse/ClickHouse/pull/22997) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Set `background_fetches_pool_size` to `8`, which is better for production usage with frequent small insertions or a slow ZooKeeper cluster. [#22945](https://github.com/ClickHouse/ClickHouse/pull/22945) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Added `initial_array_size` and `max_array_size` options to `FlatDictionary`. [#22521](https://github.com/ClickHouse/ClickHouse/pull/22521) ([Maksim Kita](https://github.com/kitaisreal)). +* Add new setting `non_replicated_deduplication_window` for deduplication of inserts into non-replicated MergeTree tables. [#22514](https://github.com/ClickHouse/ClickHouse/pull/22514) ([alesapin](https://github.com/alesapin)). +* Update paths to the `CatBoost` model configs in config reloading. [#22434](https://github.com/ClickHouse/ClickHouse/pull/22434) ([Kruglov Pavel](https://github.com/Avogar)). +* Added `Decimal256` type support in dictionaries. `Decimal256` is an experimental feature. Closes [#20979](https://github.com/ClickHouse/ClickHouse/issues/20979). [#22960](https://github.com/ClickHouse/ClickHouse/pull/22960) ([Maksim Kita](https://github.com/kitaisreal)). +* Enabled `async_socket_for_remote` by default (using fewer OS threads for distributed queries). [#23683](https://github.com/ClickHouse/ClickHouse/pull/23683) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also fixed a bug with over-compression of centroids in the implementation of an earlier version of the algorithm. [#23314](https://github.com/ClickHouse/ClickHouse/pull/23314) ([Vladimir Chebotarev](https://github.com/excitoon)). +* Make the function name `unhex` case insensitive for compatibility with MySQL. [#23229](https://github.com/ClickHouse/ClickHouse/pull/23229) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Implement functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for the generic case when the types of array elements are different. In previous versions the functions `arrayHasAny`, `arrayHasAll` returned false and `has`, `indexOf`, `countEqual` threw an exception. 
Also add support for `Decimal` and big integer types in functions `has` and similar. This closes [#20272](https://github.com/ClickHouse/ClickHouse/issues/20272). [#23044](https://github.com/ClickHouse/ClickHouse/pull/23044) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Raised the threshold on the max number of matches in the result of the function `extractAllGroupsHorizontal`. [#23036](https://github.com/ClickHouse/ClickHouse/pull/23036) ([Vasily Nemkov](https://github.com/Enmk)). +* Do not perform `optimize_skip_unused_shards` for a cluster with one node. [#22999](https://github.com/ClickHouse/ClickHouse/pull/22999) ([Azat Khuzhin](https://github.com/azat)). +* Added the ability to run clickhouse-keeper (an experimental drop-in replacement for ZooKeeper) with SSL. The config setting `keeper_server.tcp_port_secure` can be used for secure interaction between client and keeper-server. `keeper_server.raft_configuration.secure` can be used to enable internal secure communication between nodes. [#22992](https://github.com/ClickHouse/ClickHouse/pull/22992) ([alesapin](https://github.com/alesapin)). +* Added the ability to flush the buffer only in the background for `Buffer` tables. [#22986](https://github.com/ClickHouse/ClickHouse/pull/22986) ([Azat Khuzhin](https://github.com/azat)). +* When selecting from a MergeTree table with NULL in the WHERE condition, in rare cases an exception was thrown. This closes [#20019](https://github.com/ClickHouse/ClickHouse/issues/20019). [#22978](https://github.com/ClickHouse/ClickHouse/pull/22978) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix error handling in Poco HTTP Client for AWS. [#22973](https://github.com/ClickHouse/ClickHouse/pull/22973) ([kreuzerkrieg](https://github.com/kreuzerkrieg)). +* Respect `max_part_removal_threads` for `ReplicatedMergeTree`. [#22971](https://github.com/ClickHouse/ClickHouse/pull/22971) ([Azat Khuzhin](https://github.com/azat)). +* Fix an obscure corner case of MergeTree settings inactive_parts_to_throw_insert = 0 with inactive_parts_to_delay_insert > 0. [#22947](https://github.com/ClickHouse/ClickHouse/pull/22947) ([Azat Khuzhin](https://github.com/azat)). +* `dateDiff` now works with `DateTime64` arguments (even for values outside of the `DateTime` range) [#22931](https://github.com/ClickHouse/ClickHouse/pull/22931) ([Vasily Nemkov](https://github.com/Enmk)). +* MaterializeMySQL (experimental feature): added the ability to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian](https://github.com/cfroystad)). +* Allow RBAC row policy via the PostgreSQL protocol. Closes [#22658](https://github.com/ClickHouse/ClickHouse/issues/22658). The PostgreSQL protocol is enabled in the configuration by default. [#22755](https://github.com/ClickHouse/ClickHouse/pull/22755) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Add a metric to track how much time is spent waiting for the Buffer layer lock. [#22725](https://github.com/ClickHouse/ClickHouse/pull/22725) ([Azat Khuzhin](https://github.com/azat)). +* Allow using CTEs in VIEW definitions. This closes [#22491](https://github.com/ClickHouse/ClickHouse/issues/22491). [#22657](https://github.com/ClickHouse/ClickHouse/pull/22657) ([Amos Bird](https://github.com/amosbird)). +* Clear the rest of the screen and show the cursor in `clickhouse-client` if the previous program has left garbage in the terminal. This closes [#16518](https://github.com/ClickHouse/ClickHouse/issues/16518). 
[#22634](https://github.com/ClickHouse/ClickHouse/pull/22634) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Make the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (Banker's rounding) is used. [#22582](https://github.com/ClickHouse/ClickHouse/pull/22582) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Correctly check the structure of blocks of data that are sent by Distributed tables. [#22325](https://github.com/ClickHouse/ClickHouse/pull/22325) ([Azat Khuzhin](https://github.com/azat)). +* Allow publishing Kafka errors to a virtual column of the Kafka engine, controlled by the `kafka_handle_error_mode` setting. [#21850](https://github.com/ClickHouse/ClickHouse/pull/21850) ([fastio](https://github.com/fastio)). +* Add aliases `simpleJSONExtract/simpleJSONHas` to `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes [#21383](https://github.com/ClickHouse/ClickHouse/issues/21383). [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)). +* Add `clickhouse-library-bridge` for the library dictionary source. Closes [#9502](https://github.com/ClickHouse/ClickHouse/issues/9502). [#21509](https://github.com/ClickHouse/ClickHouse/pull/21509) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Forbid dropping a column if it's referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)). +* Support dynamic interserver credentials (rotating credentials without downtime). [#14113](https://github.com/ClickHouse/ClickHouse/pull/14113) ([johnskopis](https://github.com/johnskopis)). +* Add support for Kafka storage with `Arrow` and `ArrowStream` format messages. [#23415](https://github.com/ClickHouse/ClickHouse/pull/23415) ([Chao Ma](https://github.com/godliness)). +* Fixed a missing semicolon in an exception message. The user may find this exception message unpleasant to read. [#23208](https://github.com/ClickHouse/ClickHouse/pull/23208) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fixed missing whitespace in some exception messages about `LowCardinality` type. [#23207](https://github.com/ClickHouse/ClickHouse/pull/23207) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Some values were formatted with center alignment in table cells in `Markdown` format. Not anymore. [#23096](https://github.com/ClickHouse/ClickHouse/pull/23096) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Remove non-essential details from suggestions in clickhouse-client. This closes [#22158](https://github.com/ClickHouse/ClickHouse/issues/22158). [#23040](https://github.com/ClickHouse/ClickHouse/pull/23040) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Correct calculation of the `bytes_allocated` field in system.dictionaries for sparse_hashed dictionaries. [#22867](https://github.com/ClickHouse/ClickHouse/pull/22867) ([Azat Khuzhin](https://github.com/azat)). +* Fixed approximate total rows accounting for reverse reading from MergeTree. [#22726](https://github.com/ClickHouse/ClickHouse/pull/22726) ([Azat Khuzhin](https://github.com/azat)). +* Fix the case when it was possible to configure a dictionary with a ClickHouse source pointing to itself, which led to an infinite loop. Closes [#14314](https://github.com/ClickHouse/ClickHouse/issues/14314). 
[#22479](https://github.com/ClickHouse/ClickHouse/pull/22479) ([Maksim Kita](https://github.com/kitaisreal)). + +#### Bug Fix + +* Multiple fixes for hedged requests. Fixed an error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` when the setting `use_hedged_requests` is enabled. Fixes [#23431](https://github.com/ClickHouse/ClickHouse/issues/23431). [#23805](https://github.com/ClickHouse/ClickHouse/pull/23805) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). Fixed a race condition in hedged connections which led to a crash. This fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)). Fixed a possible crash in case an `unknown packet` was received from a remote query (with `async_socket_for_remote` enabled). Fixes [#21167](https://github.com/ClickHouse/ClickHouse/issues/21167). [#23309](https://github.com/ClickHouse/ClickHouse/pull/23309) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed the behavior when disabling the `input_format_with_names_use_header` setting discards all the input with the CSVWithNames format. This fixes [#22406](https://github.com/ClickHouse/ClickHouse/issues/22406). [#23202](https://github.com/ClickHouse/ClickHouse/pull/23202) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Fixed a remote JDBC bridge connection timeout issue. Closes [#9609](https://github.com/ClickHouse/ClickHouse/issues/9609). [#23771](https://github.com/ClickHouse/ClickHouse/pull/23771) ([Maksim Kita](https://github.com/kitaisreal), [alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix the logic of the initial load of `complex_key_hashed` if `update_field` is specified. Closes [#23800](https://github.com/ClickHouse/ClickHouse/issues/23800). [#23824](https://github.com/ClickHouse/ClickHouse/pull/23824) ([Maksim Kita](https://github.com/kitaisreal)). +* Fixed a crash when `PREWHERE` and a row policy filter are both in effect with an empty result. [#23763](https://github.com/ClickHouse/ClickHouse/pull/23763) ([Amos Bird](https://github.com/amosbird)). +* Avoid a possible "Cannot schedule a task" error (in case some exception had occurred) on INSERT into Distributed. [#23744](https://github.com/ClickHouse/ClickHouse/pull/23744) ([Azat Khuzhin](https://github.com/azat)). +* Added an exception for the case when both samples contain completely the same values in the aggregate function `mannWhitneyUTest`. This fixes [#23646](https://github.com/ClickHouse/ClickHouse/issues/23646). [#23654](https://github.com/ClickHouse/ClickHouse/pull/23654) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Fixed a server fault when inserting data through HTTP caused an exception. This fixes [#23512](https://github.com/ClickHouse/ClickHouse/issues/23512). [#23643](https://github.com/ClickHouse/ClickHouse/pull/23643) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Fixed misinterpretation of some `LIKE` expressions with escape sequences. [#23610](https://github.com/ClickHouse/ClickHouse/pull/23610) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fixed restart / stop command hanging. Closes [#20214](https://github.com/ClickHouse/ClickHouse/issues/20214). [#23552](https://github.com/ClickHouse/ClickHouse/pull/23552) ([filimonov](https://github.com/filimonov)). +* Fixed the `COLUMNS` matcher in case of multiple JOINs in a SELECT query. Closes [#22736](https://github.com/ClickHouse/ClickHouse/issues/22736). 
[#23501](https://github.com/ClickHouse/ClickHouse/pull/23501) ([Maksim Kita](https://github.com/kitaisreal)). +* Fixed a crash when modifying a column's default value when the column itself is used as a `ReplacingMergeTree` parameter. [#23483](https://github.com/ClickHouse/ClickHouse/pull/23483) ([hexiaoting](https://github.com/hexiaoting)). +* Fixed corner cases in vertical merges with `ReplacingMergeTree`. In rare cases they could lead to failures of merges with exceptions like `Incomplete granules are not allowed while blocks are granules size`. [#23459](https://github.com/ClickHouse/ClickHouse/pull/23459) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed a bug that did not allow casting from an empty array literal to an array with dimensions greater than 1, e.g. `CAST([] AS Array(Array(String)))`. Closes [#14476](https://github.com/ClickHouse/ClickHouse/issues/14476). [#23456](https://github.com/ClickHouse/ClickHouse/pull/23456) ([Maksim Kita](https://github.com/kitaisreal)). +* Fixed a bug when the `deltaSum` aggregate function produced an incorrect result after resetting the counter. [#23437](https://github.com/ClickHouse/ClickHouse/pull/23437) ([Russ Frank](https://github.com/rf)). +* Fixed `Cannot unlink file` error on unsuccessful creation of a ReplicatedMergeTree table with a multi-disk configuration. This closes [#21755](https://github.com/ClickHouse/ClickHouse/issues/21755). [#23433](https://github.com/ClickHouse/ClickHouse/pull/23433) ([tavplubix](https://github.com/tavplubix)). +* Fixed incompatible constant expression generation during partition pruning based on virtual columns. This fixes https://github.com/ClickHouse/ClickHouse/pull/21401#discussion_r611888913. [#23366](https://github.com/ClickHouse/ClickHouse/pull/23366) ([Amos Bird](https://github.com/amosbird)). +* Fixed a crash when the setting join_algorithm is set to 'auto' and a Join is performed with a Dictionary. Closes [#23002](https://github.com/ClickHouse/ClickHouse/issues/23002). [#23312](https://github.com/ClickHouse/ClickHouse/pull/23312) ([Vladimir](https://github.com/vdimir)). +* Don't relax NOT conditions during partition pruning. This fixes [#23305](https://github.com/ClickHouse/ClickHouse/issues/23305) and [#21539](https://github.com/ClickHouse/ClickHouse/issues/21539). [#23310](https://github.com/ClickHouse/ClickHouse/pull/23310) ([Amos Bird](https://github.com/amosbird)). +* Fixed a very rare race condition on background cleanup of old blocks. It might cause a block not to be deduplicated if it's too close to the end of the deduplication window. [#23301](https://github.com/ClickHouse/ClickHouse/pull/23301) ([tavplubix](https://github.com/tavplubix)). +* Fixed a very rare (distributed) race condition between creation and removal of ReplicatedMergeTree tables. It might cause exceptions like `node doesn't exist` on an attempt to create a replicated table. Fixes [#21419](https://github.com/ClickHouse/ClickHouse/issues/21419). [#23294](https://github.com/ClickHouse/ClickHouse/pull/23294) ([tavplubix](https://github.com/tavplubix)). +* Fixed creation of a simple-key dictionary from DDL if the primary key is not the first attribute. Fixes [#23236](https://github.com/ClickHouse/ClickHouse/issues/23236). [#23262](https://github.com/ClickHouse/ClickHouse/pull/23262) ([Maksim Kita](https://github.com/kitaisreal)). +* Fixed reading from ODBC when there are many long column names in a table. Closes [#8853](https://github.com/ClickHouse/ClickHouse/issues/8853). 
[#23215](https://github.com/ClickHouse/ClickHouse/pull/23215) ([Kseniia Sumarokova](https://github.com/kssenii)). +* MaterializeMySQL (experimental feature): fixed `Not found column` error when selecting from `MaterializeMySQL` with a condition on a key column. Fixes [#22432](https://github.com/ClickHouse/ClickHouse/issues/22432). [#23200](https://github.com/ClickHouse/ClickHouse/pull/23200) ([tavplubix](https://github.com/tavplubix)). +* Correct alias handling if a subquery was optimized to a constant. Fixes [#22924](https://github.com/ClickHouse/ClickHouse/issues/22924). Fixes [#10401](https://github.com/ClickHouse/ClickHouse/issues/10401). [#23191](https://github.com/ClickHouse/ClickHouse/pull/23191) ([Maksim Kita](https://github.com/kitaisreal)). +* The server might fail to start if the `data_type_default_nullable` setting is enabled in the default profile; it's fixed now. Fixes [#22573](https://github.com/ClickHouse/ClickHouse/issues/22573). [#23185](https://github.com/ClickHouse/ClickHouse/pull/23185) ([tavplubix](https://github.com/tavplubix)). +* Fixed a crash on shutdown which happened because of wrong accounting of current connections. [#23154](https://github.com/ClickHouse/ClickHouse/pull/23154) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fixed `Table .inner_id... doesn't exist` error when selecting from a Materialized View after detaching it from an Atomic database and attaching it back. [#23047](https://github.com/ClickHouse/ClickHouse/pull/23047) ([tavplubix](https://github.com/tavplubix)). +* Fix error `Cannot find column in ActionsDAG result` which may happen if a subquery uses `untuple`. Fixes [#22290](https://github.com/ClickHouse/ClickHouse/issues/22290). [#22991](https://github.com/ClickHouse/ClickHouse/pull/22991) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix usage of constant columns of type `Map` with nullable values. [#22939](https://github.com/ClickHouse/ClickHouse/pull/22939) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed `formatDateTime()` on `DateTime64` and the "%C" format specifier; fixed `toDateTime64()` for large values and non-zero scale. [#22937](https://github.com/ClickHouse/ClickHouse/pull/22937) ([Vasily Nemkov](https://github.com/Enmk)). +* Fixed a crash when using `mannWhitneyUTest` and `rankCorr` with window functions. This fixes [#22728](https://github.com/ClickHouse/ClickHouse/issues/22728). [#22876](https://github.com/ClickHouse/ClickHouse/pull/22876) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* LIVE VIEW (experimental feature): fixed possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in `TemporaryLiveViewCleaner`, [see](https://gist.github.com/vzakaznikov/0c03195960fc86b56bfe2bc73a90019e). [#22858](https://github.com/ClickHouse/ClickHouse/pull/22858) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fixed pushdown of `HAVING` in the case when the filter column is used in aggregation. [#22763](https://github.com/ClickHouse/ClickHouse/pull/22763) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed possible hangs in ZooKeeper requests in case of an OOM exception. Fixes [#22438](https://github.com/ClickHouse/ClickHouse/issues/22438). [#22684](https://github.com/ClickHouse/ClickHouse/pull/22684) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed wait for mutations on several replicas for ReplicatedMergeTree table engines. Previously, a mutation/ALTER query could finish before the mutation was actually executed on other replicas. 
[#22669](https://github.com/ClickHouse/ClickHouse/pull/22669) ([alesapin](https://github.com/alesapin)). +* Fixed an exception for Log tables with nested types without columns in the SELECT clause. [#22654](https://github.com/ClickHouse/ClickHouse/pull/22654) ([Azat Khuzhin](https://github.com/azat)). +* Fix unlimited wait for auxiliary AWS requests. [#22594](https://github.com/ClickHouse/ClickHouse/pull/22594) ([Vladimir Chebotarev](https://github.com/excitoon)). +* Fixed a crash when the client closes the connection very early [#22579](https://github.com/ClickHouse/ClickHouse/issues/22579). [#22591](https://github.com/ClickHouse/ClickHouse/pull/22591) ([nvartolomei](https://github.com/nvartolomei)). +* `Map` data type (experimental feature): fixed incorrect formatting of the function `map` in distributed queries. [#22588](https://github.com/ClickHouse/ClickHouse/pull/22588) ([foolchi](https://github.com/foolchi)). +* Fixed deserialization of an empty string without a newline at the end of the TSV format. This closes [#20244](https://github.com/ClickHouse/ClickHouse/issues/20244). Possible workaround without a version update: set `input_format_null_as_default` to zero. It was zero in old versions. [#22527](https://github.com/ClickHouse/ClickHouse/pull/22527) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fixed a wrong cast of a column of `LowCardinality` type in the Merge Join algorithm. Closes [#22386](https://github.com/ClickHouse/ClickHouse/issues/22386), closes [#22388](https://github.com/ClickHouse/ClickHouse/issues/22388). [#22510](https://github.com/ClickHouse/ClickHouse/pull/22510) ([Vladimir](https://github.com/vdimir)). +* Buffer overflow (on read) was possible in the `tokenbf_v1` full text index. The excessive bytes are not used but the read operation may lead to a crash in rare cases. This closes [#19233](https://github.com/ClickHouse/ClickHouse/issues/19233). [#22421](https://github.com/ClickHouse/ClickHouse/pull/22421) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Do not limit HTTP chunk size. Fixes [#21907](https://github.com/ClickHouse/ClickHouse/issues/21907). [#22322](https://github.com/ClickHouse/ClickHouse/pull/22322) ([Ivan](https://github.com/abyss7)). +* Fixed a bug which led to under-aggregation of data when `optimize_aggregation_in_order` is enabled and the table has many parts. Slightly improved performance of aggregation with `optimize_aggregation_in_order` enabled. [#21889](https://github.com/ClickHouse/ClickHouse/pull/21889) ([Anton Popov](https://github.com/CurtizJ)). +* Check if the table function view is used as a column. This complements #20350. [#21465](https://github.com/ClickHouse/ClickHouse/pull/21465) ([Amos Bird](https://github.com/amosbird)). +* Fix "unknown column" error for tables with `Merge` engine in queries with `JOIN` and aggregation. Closes [#18368](https://github.com/ClickHouse/ClickHouse/issues/18368), closes [#22226](https://github.com/ClickHouse/ClickHouse/issues/22226). [#21370](https://github.com/ClickHouse/ClickHouse/pull/21370) ([Vladimir](https://github.com/vdimir)). +* Fixed name clashes in pushdown optimization. It caused incorrect `WHERE` filtration after FULL JOIN. Closes [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)). +* Fixed a very rare bug when a quorum insert with `quorum_parallel=1` is not really a "quorum" insert because of deduplication. 
[#18215](https://github.com/ClickHouse/ClickHouse/pull/18215) ([filimonov](https://github.com/filimonov) - reported, [alesapin](https://github.com/alesapin) - fixed). + +#### Build/Testing/Packaging Improvement + +* Run stateless tests in parallel in CI. [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)). +* Simplify Debian packages. This fixes [#21698](https://github.com/ClickHouse/ClickHouse/issues/21698). [#22976](https://github.com/ClickHouse/ClickHouse/pull/22976) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Added support for ClickHouse build on Apple M1. [#21639](https://github.com/ClickHouse/ClickHouse/pull/21639) ([changvvb](https://github.com/changvvb)). +* Fixed ClickHouse Keeper build for macOS. [#22860](https://github.com/ClickHouse/ClickHouse/pull/22860) ([alesapin](https://github.com/alesapin)). +* Fixed some tests on the AArch64 platform. [#22596](https://github.com/ClickHouse/ClickHouse/pull/22596) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Added function alignment for possibly better performance. [#21431](https://github.com/ClickHouse/ClickHouse/pull/21431) ([Danila Kutenin](https://github.com/danlark1)). +* Adjust some tests to output identical results on amd64 and aarch64 (qemu). The result depended on implementation-specific CPU behaviour. [#22590](https://github.com/ClickHouse/ClickHouse/pull/22590) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Allow query profiling only on x86_64. See [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174#issuecomment-812954965) and [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638#issuecomment-703805337). This closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#22580](https://github.com/ClickHouse/ClickHouse/pull/22580) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Allow building with unbundled xz (lzma) using the `USE_INTERNAL_XZ_LIBRARY=OFF` CMake option. [#22571](https://github.com/ClickHouse/ClickHouse/pull/22571) ([Kfir Itzhak](https://github.com/mastertheknife)). +* Enable bundled `openldap` on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)). +* Disable incompatible libraries (typically platform-specific) on `ppc64le`. [#22475](https://github.com/ClickHouse/ClickHouse/pull/22475) ([Kfir Itzhak](https://github.com/mastertheknife)). +* Added a Jepsen test in CI for ClickHouse Keeper. [#22373](https://github.com/ClickHouse/ClickHouse/pull/22373) ([alesapin](https://github.com/alesapin)). +* Build `jemalloc` with support for [heap profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Heap-Profiling). [#22834](https://github.com/ClickHouse/ClickHouse/pull/22834) ([nvartolomei](https://github.com/nvartolomei)). +* Avoid UB in `*Log` engines for rwlock unlock due to unlock from another thread. [#22583](https://github.com/ClickHouse/ClickHouse/pull/22583) ([Azat Khuzhin](https://github.com/azat)). +* Fixed UB by unlocking the rwlock of the TinyLog from the same thread. [#22560](https://github.com/ClickHouse/ClickHouse/pull/22560) ([Azat Khuzhin](https://github.com/azat)). 
+ + ## ClickHouse release 21.4 ### ClickHouse release 21.4.1 2021-04-12 diff --git a/CMakeLists.txt b/CMakeLists.txt index 2c3fa088995..9c62748ff95 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -36,7 +36,7 @@ option(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION if(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION) set(RECONFIGURE_MESSAGE_LEVEL FATAL_ERROR) else() - set(RECONFIGURE_MESSAGE_LEVEL STATUS) + set(RECONFIGURE_MESSAGE_LEVEL WARNING) endif() enable_language(C CXX ASM) @@ -504,7 +504,6 @@ include (cmake/find/libuv.cmake) # for amqpcpp and cassandra include (cmake/find/amqpcpp.cmake) include (cmake/find/capnp.cmake) include (cmake/find/llvm.cmake) -include (cmake/find/termcap.cmake) # for external static llvm include (cmake/find/h3.cmake) include (cmake/find/libxml2.cmake) include (cmake/find/brotli.cmake) @@ -527,6 +526,7 @@ include (cmake/find/nanodbc.cmake) include (cmake/find/rocksdb.cmake) include (cmake/find/libpqxx.cmake) include (cmake/find/nuraft.cmake) +include (cmake/find/yaml-cpp.cmake) if(NOT USE_INTERNAL_PARQUET_LIBRARY) diff --git a/README.md b/README.md index ea9f365a3c6..5677837815c 100644 --- a/README.md +++ b/README.md @@ -8,8 +8,11 @@ ClickHouse® is an open-source column-oriented database management system that a * [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster. * [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information. * [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format. -* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. +* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. * [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events. * [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation. * [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any. * You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person. + +## Upcoming Events +* [SF Bay Area ClickHouse Community Meetup (online)](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/278144089/) on 16 June 2021. diff --git a/base/bridge/CMakeLists.txt b/base/bridge/CMakeLists.txt index 20b0b651677..bcba43e8c2e 100644 --- a/base/bridge/CMakeLists.txt +++ b/base/bridge/CMakeLists.txt @@ -3,5 +3,11 @@ add_library (bridge ) target_include_directories (daemon PUBLIC ..) -target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC) +target_link_libraries (bridge + PRIVATE + daemon + dbms + Poco::Data + Poco::Data::ODBC +) diff --git a/base/daemon/BaseDaemon.cpp b/base/daemon/BaseDaemon.cpp index 83384038b7c..01e700ebba3 100644 --- a/base/daemon/BaseDaemon.cpp +++ b/base/daemon/BaseDaemon.cpp @@ -468,7 +468,7 @@ void BaseDaemon::reloadConfiguration() * instead of using files specified in config.xml. * (It's convenient to log in console when you start server without any command line parameters.) 
*/ - config_path = config().getString("config-file", "config.xml"); + config_path = config().getString("config-file", getDefaultConfigFileName()); DB::ConfigProcessor config_processor(config_path, false, true); config_processor.setConfigPath(Poco::Path(config_path).makeParent().toString()); loaded_config = config_processor.loadConfig(/* allow_zk_includes = */ true); @@ -516,6 +516,11 @@ std::string BaseDaemon::getDefaultCorePath() const return "/opt/cores/"; } +std::string BaseDaemon::getDefaultConfigFileName() const +{ + return "config.xml"; +} + void BaseDaemon::closeFDs() { #if defined(OS_FREEBSD) || defined(OS_DARWIN) diff --git a/base/daemon/BaseDaemon.h b/base/daemon/BaseDaemon.h index 8b9d765cf2e..3d47d85a9f5 100644 --- a/base/daemon/BaseDaemon.h +++ b/base/daemon/BaseDaemon.h @@ -149,6 +149,8 @@ protected: virtual std::string getDefaultCorePath() const; + virtual std::string getDefaultConfigFileName() const; + std::optional pid_file; std::atomic_bool is_cancelled{false}; diff --git a/base/mysqlxx/PoolWithFailover.cpp b/base/mysqlxx/PoolWithFailover.cpp index ea2d060e596..e317ab7f228 100644 --- a/base/mysqlxx/PoolWithFailover.cpp +++ b/base/mysqlxx/PoolWithFailover.cpp @@ -78,6 +78,8 @@ PoolWithFailover::PoolWithFailover( const RemoteDescription & addresses, const std::string & user, const std::string & password, + unsigned default_connections_, + unsigned max_connections_, size_t max_tries_) : max_tries(max_tries_) , shareable(false) @@ -85,7 +87,13 @@ PoolWithFailover::PoolWithFailover( /// Replicas have the same priority, but traversed replicas are moved to the end of the queue. for (const auto & [host, port] : addresses) { - replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port)); + replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, + host, user, password, port, + /* socket_ = */ "", + MYSQLXX_DEFAULT_TIMEOUT, + MYSQLXX_DEFAULT_RW_TIMEOUT, + default_connections_, + max_connections_)); } } diff --git a/base/mysqlxx/PoolWithFailover.h b/base/mysqlxx/PoolWithFailover.h index 5154fc3e253..1c7a63e76c0 100644 --- a/base/mysqlxx/PoolWithFailover.h +++ b/base/mysqlxx/PoolWithFailover.h @@ -115,6 +115,8 @@ namespace mysqlxx const RemoteDescription & addresses, const std::string & user, const std::string & password, + unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS, + unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS, size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); PoolWithFailover(const PoolWithFailover & other); diff --git a/cmake/autogenerated_versions.txt b/cmake/autogenerated_versions.txt index 51f4b974161..34de50e9f8a 100644 --- a/cmake/autogenerated_versions.txt +++ b/cmake/autogenerated_versions.txt @@ -1,9 +1,9 @@ # This strings autochanged from release_lib.sh: -SET(VERSION_REVISION 54451) +SET(VERSION_REVISION 54452) SET(VERSION_MAJOR 21) -SET(VERSION_MINOR 6) +SET(VERSION_MINOR 7) SET(VERSION_PATCH 1) -SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96) -SET(VERSION_DESCRIBE v21.6.1.1-prestable) -SET(VERSION_STRING 21.6.1.1) +SET(VERSION_GITHASH 976ccc2e908ac3bc28f763bfea8134ea0a121b40) +SET(VERSION_DESCRIBE v21.7.1.1-prestable) +SET(VERSION_STRING 21.7.1.1) # end of autochange diff --git a/cmake/find/llvm.cmake b/cmake/find/llvm.cmake index e08f45b9932..816164bef10 100644 --- a/cmake/find/llvm.cmake +++ b/cmake/find/llvm.cmake @@ -1,102 +1,34 @@ -if (APPLE OR SPLIT_SHARED_LIBRARIES OR NOT ARCH_AMD64) +if (APPLE OR 
SPLIT_SHARED_LIBRARIES OR NOT ARCH_AMD64 OR SANITIZE STREQUAL "undefined") set (ENABLE_EMBEDDED_COMPILER OFF CACHE INTERNAL "") endif() option (ENABLE_EMBEDDED_COMPILER "Enable support for 'compile_expressions' option for query execution" ON) -# Broken in macos. TODO: update clang, re-test, enable on Apple -if (ENABLE_EMBEDDED_COMPILER AND NOT SPLIT_SHARED_LIBRARIES AND ARCH_AMD64 AND NOT (SANITIZE STREQUAL "undefined")) - option (USE_INTERNAL_LLVM_LIBRARY "Use bundled or system LLVM library." ${NOT_UNBUNDLED}) -endif() if (NOT ENABLE_EMBEDDED_COMPILER) - if(USE_INTERNAL_LLVM_LIBRARY) - message (${RECONFIGURE_MESSAGE_LEVEL} "Cannot use internal LLVM library with ENABLE_EMBEDDED_COMPILER=OFF") - endif() + set (USE_EMBEDDED_COMPILER 0) return() endif() if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/CMakeLists.txt") - if (USE_INTERNAL_LLVM_LIBRARY) - message (WARNING "submodule contrib/llvm is missing. to fix try run: \n git submodule update --init --recursive") - message (${RECONFIGURE_MESSAGE_LEVEL} "Can't fidd internal LLVM library") - endif() - set (MISSING_INTERNAL_LLVM_LIBRARY 1) + message (${RECONFIGURE_MESSAGE_LEVEL} "submodule /contrib/llvm is missing. to fix try run: \n git submodule update --init --recursive") endif () -if (NOT USE_INTERNAL_LLVM_LIBRARY) - set (LLVM_PATHS "/usr/local/lib/llvm" "/usr/lib/llvm") +set (USE_EMBEDDED_COMPILER 1) - foreach(llvm_v 11.1 11) - if (NOT LLVM_FOUND) - find_package (LLVM ${llvm_v} CONFIG PATHS ${LLVM_PATHS}) - endif () - endforeach () +set (LLVM_FOUND 1) +set (LLVM_VERSION "12.0.0bundled") +set (LLVM_INCLUDE_DIRS + "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/include" + "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm/include" +) +set (LLVM_LIBRARY_DIRS "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm") - if (LLVM_FOUND) - # Remove dynamically-linked zlib and libedit from LLVM's dependencies: - set_target_properties(LLVMSupport PROPERTIES INTERFACE_LINK_LIBRARIES "-lpthread;LLVMDemangle;${ZLIB_LIBRARIES}") - set_target_properties(LLVMLineEditor PROPERTIES INTERFACE_LINK_LIBRARIES "LLVMSupport") - - option(LLVM_HAS_RTTI "Enable if LLVM was build with RTTI enabled" ON) - set (USE_EMBEDDED_COMPILER 1) - else() - message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system LLVM") - set (USE_EMBEDDED_COMPILER 0) - endif() - - if (LLVM_FOUND AND OS_LINUX AND USE_LIBCXX AND NOT FORCE_LLVM_WITH_LIBCXX) - message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is not set but the LLVM library from OS packages " - "in Linux is incompatible with libc++ ABI. LLVM Will be disabled. Force: -DFORCE_LLVM_WITH_LIBCXX=ON") - message (${RECONFIGURE_MESSAGE_LEVEL} "Unsupported LLVM configuration, cannot enable LLVM") - set (LLVM_FOUND 0) - set (USE_EMBEDDED_COMPILER 0) - endif () -endif() - -if(NOT LLVM_FOUND AND NOT MISSING_INTERNAL_LLVM_LIBRARY) - if (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR) - message(WARNING "Option ENABLE_EMBEDDED_COMPILER is set but internal LLVM library cannot build if build directory is the same as source directory.") - set (LLVM_FOUND 0) - set (USE_EMBEDDED_COMPILER 0) - elseif (SPLIT_SHARED_LIBRARIES) - # llvm-tablegen cannot find shared libraries that we build. Probably can be easily fixed. - message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is not compatible with SPLIT_SHARED_LIBRARIES. Build of LLVM will be disabled.") - set (LLVM_FOUND 0) - set (USE_EMBEDDED_COMPILER 0) - elseif (NOT ARCH_AMD64) - # It's not supported yet, but you can help. - message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is only available for x86_64. 
Build of LLVM will be disabled.") - set (LLVM_FOUND 0) - set (USE_EMBEDDED_COMPILER 0) - elseif (SANITIZE STREQUAL "undefined") - # llvm-tblgen, that is used during LLVM build, doesn't work with UBSan. - message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY does not work with UBSan, because 'llvm-tblgen' tool from LLVM has undefined behaviour. Build of LLVM will be disabled.") - set (LLVM_FOUND 0) - set (USE_EMBEDDED_COMPILER 0) - else () - set (USE_INTERNAL_LLVM_LIBRARY ON) - set (LLVM_FOUND 1) - set (USE_EMBEDDED_COMPILER 1) - set (LLVM_VERSION "9.0.0bundled") - set (LLVM_INCLUDE_DIRS - "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/include" - "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm/include" - ) - set (LLVM_LIBRARY_DIRS "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm") - endif() -endif() - -if (LLVM_FOUND) - message(STATUS "LLVM include Directory: ${LLVM_INCLUDE_DIRS}") - message(STATUS "LLVM library Directory: ${LLVM_LIBRARY_DIRS}") - message(STATUS "LLVM C++ compiler flags: ${LLVM_CXXFLAGS}") -else() - message (${RECONFIGURE_MESSAGE_LEVEL} "Can't enable LLVM") -endif() +message(STATUS "LLVM include Directory: ${LLVM_INCLUDE_DIRS}") +message(STATUS "LLVM library Directory: ${LLVM_LIBRARY_DIRS}") +message(STATUS "LLVM C++ compiler flags: ${LLVM_CXXFLAGS}") # This list was generated by listing all LLVM libraries, compiling the binary and removing all libraries while it still compiles. set (REQUIRED_LLVM_LIBRARIES -LLVMOrcJIT LLVMExecutionEngine LLVMRuntimeDyld LLVMX86CodeGen diff --git a/cmake/find/termcap.cmake b/cmake/find/termcap.cmake deleted file mode 100644 index 58454165785..00000000000 --- a/cmake/find/termcap.cmake +++ /dev/null @@ -1,17 +0,0 @@ -if (ENABLE_EMBEDDED_COMPILER AND NOT USE_INTERNAL_LLVM_LIBRARY AND USE_STATIC_LIBRARIES) - find_library (TERMCAP_LIBRARY tinfo) - if (NOT TERMCAP_LIBRARY) - find_library (TERMCAP_LIBRARY ncurses) - endif() - if (NOT TERMCAP_LIBRARY) - find_library (TERMCAP_LIBRARY termcap) - endif() - - if (NOT TERMCAP_LIBRARY) - message (FATAL_ERROR "Statically Linking external LLVM requires termcap") - endif() - - target_link_libraries(LLVMSupport INTERFACE ${TERMCAP_LIBRARY}) - - message (STATUS "Using termcap: ${TERMCAP_LIBRARY}") -endif() diff --git a/cmake/find/yaml-cpp.cmake b/cmake/find/yaml-cpp.cmake new file mode 100644 index 00000000000..9b9d9bd39d6 --- /dev/null +++ b/cmake/find/yaml-cpp.cmake @@ -0,0 +1,9 @@ +option(USE_YAML_CPP "Enable yaml-cpp" ${ENABLE_LIBRARIES}) + +if (NOT USE_YAML_CPP) + return() +endif() + +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp") + message (ERROR "submodule contrib/yaml-cpp is missing. 
to fix try run: \n git submodule update --init --recursive") +endif() diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 9eafec23f51..c499da9d087 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -50,6 +50,10 @@ add_subdirectory (replxx-cmake) add_subdirectory (unixodbc-cmake) add_subdirectory (nanodbc-cmake) +if (USE_YAML_CPP) + add_subdirectory (yaml-cpp-cmake) +endif() + if (USE_INTERNAL_XZ_LIBRARY) add_subdirectory (xz) endif() @@ -205,11 +209,12 @@ elseif(GTEST_SRC_DIR) target_compile_definitions(gtest INTERFACE GTEST_HAS_POSIX_RE=0) endif() -if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY) +if (USE_EMBEDDED_COMPILER) # ld: unknown option: --color-diagnostics if (APPLE) set (LINKER_SUPPORTS_COLOR_DIAGNOSTICS 0 CACHE INTERNAL "") endif () + set (LLVM_ENABLE_EH 1 CACHE INTERNAL "") set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "") set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "") @@ -224,8 +229,6 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY) set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak}) unset (CMAKE_CXX_STANDARD_bak) - - target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR}) endif () if (USE_INTERNAL_LIBGSASL_LIBRARY) diff --git a/contrib/avro b/contrib/avro index 92caca2d42f..1ee16d8c5a7 160000 --- a/contrib/avro +++ b/contrib/avro @@ -1 +1 @@ -Subproject commit 92caca2d42fc9a97e34e95f963593539d32ed331 +Subproject commit 1ee16d8c5a7808acff5cf0475f771195d9aa3faa diff --git a/contrib/grpc b/contrib/grpc index 5b79aae85c5..60c986e15ca 160000 --- a/contrib/grpc +++ b/contrib/grpc @@ -1 +1 @@ -Subproject commit 5b79aae85c515e0df4abfb7b1e07975fdc7cecc1 +Subproject commit 60c986e15cae70aade721d26badabab1f822fdd6 diff --git a/contrib/libunwind b/contrib/libunwind index 8fe25d7dc70..a491c27b331 160000 --- a/contrib/libunwind +++ b/contrib/libunwind @@ -1 +1 @@ -Subproject commit 8fe25d7dc70f2a4ea38c3e5a33fa9d4199b67a5a +Subproject commit a491c27b33109a842d577c0f7ac5f5f218859181 diff --git a/contrib/llvm b/contrib/llvm index cfaf365cf96..e5751459412 160000 --- a/contrib/llvm +++ b/contrib/llvm @@ -1 +1 @@ -Subproject commit cfaf365cf96918999d09d976ec736b4518cf5d02 +Subproject commit e5751459412bce1391fb7a2e9bbc01e131bf72f1 diff --git a/contrib/re2 b/contrib/re2 index 7cf8b88e8f7..13ebb377c6a 160000 --- a/contrib/re2 +++ b/contrib/re2 @@ -1 +1 @@ -Subproject commit 7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0 +Subproject commit 13ebb377c6ad763ca61d12dd6f88b1126bd0b911 diff --git a/contrib/re2_st/re2_transform.cmake b/contrib/re2_st/re2_transform.cmake index 2d50d9e8c2a..56a96f45630 100644 --- a/contrib/re2_st/re2_transform.cmake +++ b/contrib/re2_st/re2_transform.cmake @@ -1,7 +1,7 @@ file (READ ${SOURCE_FILENAME} CONTENT) string (REGEX REPLACE "using re2::RE2;" "" CONTENT "${CONTENT}") string (REGEX REPLACE "using re2::LazyRE2;" "" CONTENT "${CONTENT}") -string (REGEX REPLACE "namespace re2" "namespace re2_st" CONTENT "${CONTENT}") +string (REGEX REPLACE "namespace re2 {" "namespace re2_st {" CONTENT "${CONTENT}") string (REGEX REPLACE "re2::" "re2_st::" CONTENT "${CONTENT}") string (REGEX REPLACE "\"re2/" "\"re2_st/" CONTENT "${CONTENT}") string (REGEX REPLACE "(.\\*?_H)" "\\1_ST" CONTENT "${CONTENT}") diff --git a/contrib/yaml-cpp b/contrib/yaml-cpp new file mode 160000 index 00000000000..0c86adac6d1 --- /dev/null +++ b/contrib/yaml-cpp @@ -0,0 +1 @@ +Subproject commit 0c86adac6d117ee2b4afcedb8ade19036ca0327d diff --git a/contrib/yaml-cpp-cmake/CMakeLists.txt b/contrib/yaml-cpp-cmake/CMakeLists.txt new file mode 100644 index 
00000000000..ed0287de110 --- /dev/null +++ b/contrib/yaml-cpp-cmake/CMakeLists.txt @@ -0,0 +1,39 @@ +set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp) + +set (SRCS + ${LIBRARY_DIR}/src/binary.cpp + ${LIBRARY_DIR}/src/emitterutils.cpp + ${LIBRARY_DIR}/src/null.cpp + ${LIBRARY_DIR}/src/scantoken.cpp + ${LIBRARY_DIR}/src/convert.cpp + ${LIBRARY_DIR}/src/exceptions.cpp + ${LIBRARY_DIR}/src/ostream_wrapper.cpp + ${LIBRARY_DIR}/src/simplekey.cpp + ${LIBRARY_DIR}/src/depthguard.cpp + ${LIBRARY_DIR}/src/exp.cpp + ${LIBRARY_DIR}/src/parse.cpp + ${LIBRARY_DIR}/src/singledocparser.cpp + ${LIBRARY_DIR}/src/directives.cpp + ${LIBRARY_DIR}/src/memory.cpp + ${LIBRARY_DIR}/src/parser.cpp + ${LIBRARY_DIR}/src/stream.cpp + ${LIBRARY_DIR}/src/emit.cpp + ${LIBRARY_DIR}/src/nodebuilder.cpp + ${LIBRARY_DIR}/src/regex_yaml.cpp + ${LIBRARY_DIR}/src/tag.cpp + ${LIBRARY_DIR}/src/emitfromevents.cpp + ${LIBRARY_DIR}/src/node.cpp + ${LIBRARY_DIR}/src/scanner.cpp + ${LIBRARY_DIR}/src/emitter.cpp + ${LIBRARY_DIR}/src/node_data.cpp + ${LIBRARY_DIR}/src/scanscalar.cpp + ${LIBRARY_DIR}/src/emitterstate.cpp + ${LIBRARY_DIR}/src/nodeevents.cpp + ${LIBRARY_DIR}/src/scantag.cpp +) + +add_library (yaml-cpp ${SRCS}) + + +target_include_directories(yaml-cpp PRIVATE ${LIBRARY_DIR}/include/yaml-cpp) +target_include_directories(yaml-cpp SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include) diff --git a/debian/changelog b/debian/changelog index 8b6626416a9..e1c46dae3a8 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,5 +1,5 @@ -clickhouse (21.6.1.1) unstable; urgency=low +clickhouse (21.7.1.1) unstable; urgency=low * Modified source code - -- clickhouse-release Tue, 20 Apr 2021 01:48:16 +0300 + -- clickhouse-release Thu, 20 May 2021 22:23:29 +0300 diff --git a/docker/client/Dockerfile b/docker/client/Dockerfile index 569025dec1c..79ac92f2277 100644 --- a/docker/client/Dockerfile +++ b/docker/client/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.6.1.* +ARG version=21.7.1.* RUN apt-get update \ && apt-get install --yes --no-install-recommends \ diff --git a/docker/packager/packager b/docker/packager/packager index 836f30dec42..81474166cc9 100755 --- a/docker/packager/packager +++ b/docker/packager/packager @@ -154,6 +154,10 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ if clang_tidy: cmake_flags.append('-DENABLE_CLANG_TIDY=1') + cmake_flags.append('-DENABLE_UTILS=1') + cmake_flags.append('-DUSE_GTEST=1') + cmake_flags.append('-DENABLE_TESTS=1') + cmake_flags.append('-DENABLE_EXAMPLES=1') # Don't stop on first error to find more clang-tidy errors in one run. 
result.append('NINJA_FLAGS=-k0') diff --git a/docker/server/Dockerfile b/docker/server/Dockerfile index d302fec7417..52dcb6caae5 100644 --- a/docker/server/Dockerfile +++ b/docker/server/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:20.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.6.1.* +ARG version=21.7.1.* ARG gosu_ver=1.10 # set non-empty deb_location_url url to create a docker image diff --git a/docker/test/Dockerfile b/docker/test/Dockerfile index 0e4646386ce..9809a36395d 100644 --- a/docker/test/Dockerfile +++ b/docker/test/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.6.1.* +ARG version=21.7.1.* RUN apt-get update && \ apt-get install -y apt-transport-https dirmngr && \ diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 42c720a7e63..fc73a0df0ee 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -73,7 +73,7 @@ function start_server --path "$FASTTEST_DATA" --user_files_path "$FASTTEST_DATA/user_files" --top_level_domains_path "$FASTTEST_DATA/top_level_domains" - --keeper_server.log_storage_path "$FASTTEST_DATA/coordination" + --keeper_server.storage_path "$FASTTEST_DATA/coordination" ) clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" & server_pid=$! @@ -374,37 +374,17 @@ function run_tests 01801_s3_cluster # Depends on LLVM JIT + 01072_nullable_jit 01852_jit_if 01865_jit_comparison_constant_result + 01871_merge_tree_compile_expressions ) - (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" - - # substr is to remove semicolon after test name - readarray -t FAILED_TESTS < <(awk '/\[ FAIL|TIMEOUT|ERROR \]/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt") - - # We will rerun sequentially any tests that have failed during parallel run. - # They might have failed because there was some interference from other tests - # running concurrently. If they fail even in seqential mode, we will report them. - # FIXME All tests that require exclusive access to the server must be - # explicitly marked as `sequential`, and `clickhouse-test` must detect them and - # run them in a separate group after all other tests. This is faster and also - # explicit instead of guessing. - if [[ -n "${FAILED_TESTS[*]}" ]] - then - stop_server ||: - - # Clean the data so that there is no interference from the previous test run. 
- rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files,coordination} ||: - - start_server - - echo "Going to run again: ${FAILED_TESTS[*]}" - - clickhouse-test --hung-check --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt" - else - echo "No failed tests" - fi + time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \ + --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" \ + -- "$FASTTEST_FOCUS" 2>&1 \ + | ts '%Y-%m-%d %H:%M:%S' \ + | tee "$FASTTEST_OUTPUT/test_log.txt" } case "$stage" in diff --git a/docker/test/fuzzer/run-fuzzer.sh b/docker/test/fuzzer/run-fuzzer.sh index 626bedb453c..670fc9e58b3 100755 --- a/docker/test/fuzzer/run-fuzzer.sh +++ b/docker/test/fuzzer/run-fuzzer.sh @@ -56,17 +56,19 @@ function watchdog sleep 3600 echo "Fuzzing run has timed out" - killall clickhouse-client ||: for _ in {1..10} do - if ! pgrep -f clickhouse-client + # Only kill by pid the particular client that runs the fuzzing, or else + # we can kill some clickhouse-client processes this script starts later, + # e.g. for checking server liveness. + if ! kill $fuzzer_pid then break fi sleep 1 done - killall -9 clickhouse-client ||: + kill -9 -- $fuzzer_pid ||: } function filter_exists @@ -85,7 +87,7 @@ function fuzz { # Obtain the list of newly added tests. They will be fuzzed in more extreme way than other tests. # Don't overwrite the NEW_TESTS_OPT so that it can be set from the environment. - NEW_TESTS="$(grep -P 'tests/queries/0_stateless/.*\.sql' ci-changed-files.txt | sed -r -e 's!^!ch/!' | sort -R)" + NEW_TESTS="$(sed -n 's!\(^tests/queries/0_stateless/.*\.sql\)$!ch/\1!p' ci-changed-files.txt | sort -R)" # ci-changed-files.txt contains also files that has been deleted/renamed, filter them out. NEW_TESTS="$(filter_exists $NEW_TESTS)" if [[ -n "$NEW_TESTS" ]] @@ -115,17 +117,49 @@ continue gdb -batch -command script.gdb -p "$(pidof clickhouse-server)" & - fuzzer_exit_code=0 # SC2012: Use find instead of ls to better handle non-alphanumeric filenames. They are all alphanumeric. # SC2046: Quote this to prevent word splitting. Actually I need word splitting. # shellcheck disable=SC2012,SC2046 clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \ > >(tail -n 100000 > fuzzer.log) \ - 2>&1 \ - || fuzzer_exit_code=$? + 2>&1 & + fuzzer_pid=$! + echo "Fuzzer pid is $fuzzer_pid" + # Start a watchdog that should kill the fuzzer on timeout. + # The shell won't kill the child sleep when we kill it, so we have to put it + # into a separate process group so that we can kill them all. + set -m + watchdog & + watchdog_pid=$! + set +m + # Check that the watchdog has started. + kill -0 $watchdog_pid + + # Wait for the fuzzer to complete. + # Note that the 'wait || ...' thing is required so that the script doesn't + # exit because of 'set -e' when 'wait' returns nonzero code. + fuzzer_exit_code=0 + wait "$fuzzer_pid" || fuzzer_exit_code=$? echo "Fuzzer exit code is $fuzzer_exit_code" + kill -- -$watchdog_pid ||: + + # If the server dies, most often the fuzzer returns code 210: connetion + # refused, and sometimes also code 32: attempt to read after eof. For + # simplicity, check again whether the server is accepting connections, using + # clickhouse-client. We don't check for existence of server process, because + # the process is still present while the server is terminating and not + # accepting the connections anymore. 
+ if clickhouse-client --query "select 1 format Null" + then + server_died=0 + else + echo "Server live check returns $?" + server_died=1 + fi + + # Stop the server. clickhouse-client --query "select elapsed, query from system.processes" ||: killall clickhouse-server ||: for _ in {1..10} @@ -137,6 +171,41 @@ continue sleep 1 done killall -9 clickhouse-server ||: + + # Debug. + date + sleep 10 + jobs + pstree -aspgT + + # Make files with status and description we'll show for this check on Github. + task_exit_code=$fuzzer_exit_code + if [ "$server_died" == 1 ] + then + # The server has died. + task_exit_code=210 + echo "failure" > status.txt + if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt + then + echo "Lost connection to server. See the logs." > description.txt + fi + elif [ "$fuzzer_exit_code" == "143" ] || [ "$fuzzer_exit_code" == "0" ] + then + # Variants of a normal run: + # 0 -- fuzzing ended earlier than timeout. + # 143 -- SIGTERM -- the fuzzer was killed by timeout. + task_exit_code=0 + echo "success" > status.txt + echo "OK" > description.txt + else + # The server was alive, but the fuzzer returned some error. Probably this + # is a problem in the fuzzer itself. Don't grep the server log in this + # case, because we will find a message about normal server termination + # (Received signal 15), which is confusing. + task_exit_code=$fuzzer_exit_code + echo "failure" > status.txt + echo "Fuzzer failed ($fuzzer_exit_code). See the logs." > description.txt + fi } case "$stage" in @@ -165,50 +234,7 @@ case "$stage" in time configure ;& "fuzz") - # Start a watchdog that should kill the fuzzer on timeout. - # The shell won't kill the child sleep when we kill it, so we have to put it - # into a separate process group so that we can kill them all. - set -m - watchdog & - watchdog_pid=$! - set +m - # Check that the watchdog has started - kill -0 $watchdog_pid - - fuzzer_exit_code=0 - time fuzz || fuzzer_exit_code=$? - kill -- -$watchdog_pid ||: - - # Debug - date - sleep 10 - jobs - pstree -aspgT - - # Make files with status and description we'll show for this check on Github - task_exit_code=$fuzzer_exit_code - if [ "$fuzzer_exit_code" == 143 ] - then - # SIGTERM -- the fuzzer was killed by timeout, which means a normal run. - echo "success" > status.txt - echo "OK" > description.txt - task_exit_code=0 - elif [ "$fuzzer_exit_code" == 210 ] - then - # Lost connection to the server. This probably means that the server died - # with abort. - echo "failure" > status.txt - if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt - then - echo "Lost connection to server. See the logs." > description.txt - fi - else - # Something different -- maybe the fuzzer itself died? Don't grep the - # server log in this case, because we will find a message about normal - # server termination (Received signal 15), which is confusing. - echo "failure" > status.txt - echo "Fuzzer failed ($fuzzer_exit_code). See the logs." 
> description.txt - fi + time fuzz ;& "report") cat > report.html < >(tee -a analyze/errors.log 1>&2) + +# Fetch historical query variability thresholds from the CI database +clickhouse-local --query " + left join file('analyze/report-thresholds.tsv', TSV, + 'test text, report_threshold float') thresholds + on query_metric_stats.test = thresholds.test +" + +if [ -v CHPC_DATABASE_URL ] +then + set +x # Don't show password in the log + client=(clickhouse-client + # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000 + # so I have to extract host and port with clickhouse-local. I tried to use + # Poco URI parser to support this in the client, but it's broken and can't + # parse host:port. + $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") + --secure + --user "${CHPC_DATABASE_USER}" + --password "${CHPC_DATABASE_PASSWORD}" + --config "right/config/client_config.xml" + --database perftest + --date_time_input_format=best_effort) + + +# Precision is going to be 1.5 times worse for PRs. How do I know it? I ran this: +# SELECT quantilesExact(0., 0.1, 0.5, 0.75, 0.95, 1.)(p / m) +# FROM +# ( +# SELECT +# quantileIf(0.95)(stat_threshold, pr_number = 0) AS m, +# quantileIf(0.95)(stat_threshold, (pr_number != 0) AND (abs(diff) < stat_threshold)) AS p +# FROM query_metrics_v2 +# WHERE (event_date > (today() - toIntervalMonth(1))) AND (metric = 'client_time') +# GROUP BY +# test, +# query_index, +# query_display_name +# HAVING count(*) > 100 +# ) +# The file can be empty if the server is inaccessible, so we can't use TSVWithNamesAndTypes. + "${client[@]}" --query " + select test, query_index, + quantileExact(0.99)(abs(diff)) max_diff, + quantileExactIf(0.99)(stat_threshold, abs(diff) < stat_threshold) * 1.5 max_stat_threshold, + query_display_name + from query_metrics_v2 + where event_date > now() - interval 1 month + and metric = 'client_time' + and pr_number = 0 + group by test, query_index, query_display_name + having count(*) > 100 + " > analyze/historical-thresholds.tsv +else + touch analyze/historical-thresholds.tsv +fi + } # Analyze results @@ -596,6 +653,26 @@ create view query_metric_stats as diff float, stat_threshold float') ; +create table report_thresholds engine File(TSVWithNamesAndTypes, 'report/thresholds.tsv') + as select + query_display_names.test test, query_display_names.query_index query_index, + ceil(greatest(0.1, historical_thresholds.max_diff, + test_thresholds.report_threshold), 2) changed_threshold, + ceil(greatest(0.2, historical_thresholds.max_stat_threshold, + test_thresholds.report_threshold + 0.1), 2) unstable_threshold, + query_display_names.query_display_name query_display_name + from query_display_names + left join file('analyze/historical-thresholds.tsv', TSV, + 'test text, query_index int, max_diff float, max_stat_threshold float, + query_display_name text') historical_thresholds + on query_display_names.test = historical_thresholds.test + and query_display_names.query_index = historical_thresholds.query_index + and query_display_names.query_display_name = historical_thresholds.query_display_name + left join file('analyze/report-thresholds.tsv', TSV, + 'test text, report_threshold float') test_thresholds + on query_display_names.test = test_thresholds.test + ; + -- Main statistics for queries -- query time as reported in query log. 
create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv') as select @@ -610,23 +687,23 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv') -- uncaught regressions, because for the default 7 runs we do for PRs, -- the randomization distribution has only 16 values, so the max quantile -- is actually 0.9375. - abs(diff) > report_threshold and abs(diff) >= stat_threshold as changed_fail, - abs(diff) > report_threshold - 0.05 and abs(diff) >= stat_threshold as changed_show, + abs(diff) > changed_threshold and abs(diff) >= stat_threshold as changed_fail, + abs(diff) > changed_threshold - 0.05 and abs(diff) >= stat_threshold as changed_show, - not changed_fail and stat_threshold > report_threshold + 0.10 as unstable_fail, - not changed_show and stat_threshold > report_threshold - 0.05 as unstable_show, + not changed_fail and stat_threshold > unstable_threshold as unstable_fail, + not changed_show and stat_threshold > unstable_threshold - 0.05 as unstable_show, left, right, diff, stat_threshold, - if(report_threshold > 0, report_threshold, 0.10) as report_threshold, query_metric_stats.test test, query_metric_stats.query_index query_index, - query_display_name + query_display_names.query_display_name query_display_name from query_metric_stats - left join file('analyze/report-thresholds.tsv', TSV, - 'test text, report_threshold float') thresholds - on query_metric_stats.test = thresholds.test left join query_display_names on query_metric_stats.test = query_display_names.test and query_metric_stats.query_index = query_display_names.query_index + left join report_thresholds + on query_display_names.test = report_thresholds.test + and query_display_names.query_index = report_thresholds.query_index + and query_display_names.query_display_name = report_thresholds.query_display_name -- 'server_time' is rounded down to ms, which might be bad for very short queries. -- Use 'client_time' instead. where metric_name = 'client_time' @@ -889,7 +966,6 @@ create table all_query_metrics_tsv engine File(TSV, 'report/all-query-metrics.ts order by test, query_index; " 2> >(tee -a report/errors.log 1>&2) - # Prepare source data for metrics and flamegraphs for queries that were profiled # by perf.py. for version in {right,left} diff --git a/docker/test/performance-comparison/perf.py b/docker/test/performance-comparison/perf.py index 8231caceca8..9628c512e83 100755 --- a/docker/test/performance-comparison/perf.py +++ b/docker/test/performance-comparison/perf.py @@ -44,7 +44,7 @@ parser.add_argument('--port', nargs='*', default=[9000], help="Space-separated l parser.add_argument('--runs', type=int, default=1, help='Number of query runs per server.') parser.add_argument('--max-queries', type=int, default=None, help='Test no more than this number of queries, chosen at random.') parser.add_argument('--queries-to-run', nargs='*', type=int, default=None, help='Space-separated list of indexes of queries to test.') -parser.add_argument('--max-query-seconds', type=int, default=10, help='For how many seconds at most a query is allowed to run. The script finishes with error if this time is exceeded.') +parser.add_argument('--max-query-seconds', type=int, default=15, help='For how many seconds at most a query is allowed to run. 
The script finishes with error if this time is exceeded.') parser.add_argument('--profile-seconds', type=int, default=0, help='For how many seconds to profile a query for which the performance has changed.') parser.add_argument('--long', action='store_true', help='Do not skip the tests tagged as long.') parser.add_argument('--print-queries', action='store_true', help='Print test queries and exit.') @@ -273,8 +273,14 @@ for query_index in queries_to_run: prewarm_id = f'{query_prefix}.prewarm0' try: - # Will also detect too long queries during warmup stage - res = c.execute(q, query_id = prewarm_id, settings = {'max_execution_time': args.max_query_seconds}) + # During the warmup runs, we will also: + # * detect queries that are exceedingly long, to fail fast, + # * collect profiler traces, which might be helpful for analyzing + # test coverage. We disable profiler for normal runs because + # it makes the results unstable. + res = c.execute(q, query_id = prewarm_id, + settings = {'max_execution_time': args.max_query_seconds, + 'query_profiler_real_time_period_ns': 10000000}) except clickhouse_driver.errors.Error as e: # Add query id to the exception to make debugging easier. e.args = (prewarm_id, *e.args) @@ -359,10 +365,11 @@ for query_index in queries_to_run: # For very short queries we have a special mode where we run them for at # least some time. The recommended lower bound of run time for "normal" # queries is about 0.1 s, and we run them about 10 times, giving the - # time per query per server of about one second. Use this value as a - # reference for "short" queries. + # time per query per server of about one second. Run "short" queries + # for longer time, because they have a high percentage of overhead and + # might give less stable results. if is_short[query_index]: - if server_seconds >= 2 * len(this_query_connections): + if server_seconds >= 8 * len(this_query_connections): break # Also limit the number of runs, so that we don't go crazy processing # the results -- 'eqmed.sql' is really suboptimal. diff --git a/docker/test/performance-comparison/report.py b/docker/test/performance-comparison/report.py index 42490971127..dabf6b7b93d 100755 --- a/docker/test/performance-comparison/report.py +++ b/docker/test/performance-comparison/report.py @@ -446,11 +446,17 @@ if args.report == 'main': attrs[3] = f'style="background: {color_bad}"' else: attrs[3] = '' + # Just don't add the slightly unstable queries we don't consider + # errors. It's not clear what the user should do with them. + continue text += tableRow(r, attrs, anchor) text += tableEnd() - tables.append(text) + + # Don't add an empty table. + if very_unstable_queries: + tables.append(text) add_unstable_queries() @@ -549,16 +555,15 @@ if args.report == 'main': message_array.append(str(slower_queries) + ' slower') if unstable_partial_queries: - unstable_queries += unstable_partial_queries - error_tests += unstable_partial_queries + very_unstable_queries += unstable_partial_queries status = 'failure' - if unstable_queries: - message_array.append(str(unstable_queries) + ' unstable') - -# Disabled before fix. -# if very_unstable_queries: -# status = 'failure' + # Don't show mildly unstable queries, only the very unstable ones we + # treat as errors. 
+ if very_unstable_queries: + error_tests += very_unstable_queries + status = 'failure' + message_array.append(str(very_unstable_queries) + ' unstable') error_tests += slow_average_tests if error_tests: diff --git a/docker/test/sqlancer/Dockerfile b/docker/test/sqlancer/Dockerfile index 6bcdc3df5cd..253ca1b729a 100644 --- a/docker/test/sqlancer/Dockerfile +++ b/docker/test/sqlancer/Dockerfile @@ -2,7 +2,6 @@ FROM ubuntu:20.04 RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven python3 --yes --no-install-recommends - RUN wget https://github.com/sqlancer/sqlancer/archive/master.zip -O /sqlancer.zip RUN mkdir /sqlancer && \ cd /sqlancer && \ diff --git a/docker/test/testflows/runner/Dockerfile b/docker/test/testflows/runner/Dockerfile index ae95c18bc14..9fa028fedca 100644 --- a/docker/test/testflows/runner/Dockerfile +++ b/docker/test/testflows/runner/Dockerfile @@ -73,4 +73,4 @@ RUN set -x \ VOLUME /var/lib/docker EXPOSE 2375 ENTRYPOINT ["dockerd-entrypoint.sh"] -CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv"] +CMD ["sh", "-c", "python3 regression.py --no-color -o new-fails --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv; find * -type f | grep _instances | grep clickhouse-server | xargs -n1 tar -rvf clickhouse_logs.tar; gzip -9 clickhouse_logs.tar"] diff --git a/docs/en/commercial/cloud.md b/docs/en/commercial/cloud.md index 5a897a77db2..a30d17828ab 100644 --- a/docs/en/commercial/cloud.md +++ b/docs/en/commercial/cloud.md @@ -41,6 +41,14 @@ toc_title: Cloud - Built-in monitoring and database management platform - Professional database expert technical support and service +## SberCloud {#sbercloud} + +[SberCloud.Advanced](https://sbercloud.ru/en/advanced) provides [MapReduce Service (MRS)](https://docs.sbercloud.ru/mrs/ug/topics/ug__clickhouse.html), a reliable, secure, and easy-to-use enterprise-level platform for storing, processing, and analyzing big data. MRS allows you to quickly create and manage ClickHouse clusters. + +- A ClickHouse instance consists of three ZooKeeper nodes and multiple ClickHouse nodes. The Dedicated Replica mode is used to ensure high reliability of dual data copies. +- MRS provides smooth and elastic scaling capabilities to quickly meet service growth requirements in scenarios where the cluster storage capacity or CPU computing resources are not enough. When you expand the capacity of ClickHouse nodes in a cluster, MRS provides a one-click data balancing tool and gives you the initiative to balance data. You can determine the data balancing mode and time based on service characteristics to ensure service availability, implementing smooth scaling. +- MRS uses the Elastic Load Balance ensuring high availability deployment architecture to automatically distribute user access traffic to multiple backend nodes, expanding service capabilities to external systems and improving fault tolerance. 
With the ELB polling mechanism, data is written to local tables and read from distributed tables on different nodes. In this way, data read/write load and high availability of application access are guaranteed. + ## Tencent Cloud {#tencent-cloud} [Tencent Managed Service for ClickHouse](https://cloud.tencent.com/product/cdwch) provides the following key features: diff --git a/docs/en/commercial/index.md b/docs/en/commercial/index.md index 0f69df62c7b..90e74d88ea8 100644 --- a/docs/en/commercial/index.md +++ b/docs/en/commercial/index.md @@ -14,4 +14,4 @@ Service categories: - [Support](../commercial/support.md) !!! note "For service providers" - If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service doesn’t fit into existing categories). The easiest way to open a pull-request for documentation page is by using a “pencil” edit button in the top-right corner. If your service available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull-request description). + If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service does not fit into existing categories). The easiest way to open a pull-request for a documentation page is by using a “pencil” edit button in the top-right corner. If your service is available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull-request description). diff --git a/docs/en/development/adding_test_queries.md b/docs/en/development/adding_test_queries.md index db5355393ac..95dfd076a12 100644 --- a/docs/en/development/adding_test_queries.md +++ b/docs/en/development/adding_test_queries.md @@ -120,11 +120,11 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te - don't switch databases (unless necessary) - you can create several table replicas on the same node if needed - you can use one of the test cluster definitions when needed (see system.clusters) -- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when appliable +- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable - clean up the created objects after test and before the test (DROP IF EXISTS) - in case of some dirty state - prefer sync mode of operations (mutations, merges, etc.) - use other SQL files in the `0_stateless` folder as an example -- ensure the feature / feature combination you want to tests is not covered yet with existing tests +- ensure the feature / feature combination you want to test is not yet covered with existing tests #### Commit / push / create PR. diff --git a/docs/en/development/architecture.md b/docs/en/development/architecture.md index 4ef01f4e4fb..424052001fd 100644 --- a/docs/en/development/architecture.md +++ b/docs/en/development/architecture.md @@ -21,11 +21,11 @@ Various `IColumn` implementations (`ColumnUInt8`, `ColumnString`, and so on) are Nevertheless, it is possible to work with individual values as well. To represent an individual value, the `Field` is used. `Field` is just a discriminated union of `UInt64`, `Int64`, `Float64`, `String` and `Array`. `IColumn` has the `operator []` method to get the n-th value as a `Field`, and the `insert` method to append a `Field` to the end of a column.
These methods are not very efficient, because they require dealing with temporary `Field` objects representing an individual value. There are more efficient methods, such as `insertFrom`, `insertRangeFrom`, and so on. -`Field` doesn’t have enough information about a specific data type for a table. For example, `UInt8`, `UInt16`, `UInt32`, and `UInt64` are all represented as `UInt64` in a `Field`. +`Field` does not have enough information about a specific data type for a table. For example, `UInt8`, `UInt16`, `UInt32`, and `UInt64` are all represented as `UInt64` in a `Field`. ## Leaky Abstractions {#leaky-abstractions} -`IColumn` has methods for common relational transformations of data, but they don’t meet all needs. For example, `ColumnUInt64` doesn’t have a method to calculate the sum of two columns, and `ColumnString` doesn’t have a method to run a substring search. These countless routines are implemented outside of `IColumn`. +`IColumn` has methods for common relational transformations of data, but they do not meet all needs. For example, `ColumnUInt64` does not have a method to calculate the sum of two columns, and `ColumnString` does not have a method to run a substring search. These countless routines are implemented outside of `IColumn`. Various functions on columns can be implemented in a generic, non-efficient way using `IColumn` methods to extract `Field` values, or in a specialized way using knowledge of inner memory layout of data in a specific `IColumn` implementation. It is implemented by casting functions to a specific `IColumn` type and deal with internal representation directly. For example, `ColumnUInt64` has the `getData` method that returns a reference to an internal array, then a separate routine reads or fills that array directly. We have “leaky abstractions” to allow efficient specializations of various routines. @@ -35,7 +35,7 @@ Various functions on columns can be implemented in a generic, non-efficient way `IDataType` and `IColumn` are only loosely related to each other. Different data types can be represented in memory by the same `IColumn` implementations. For example, `DataTypeUInt32` and `DataTypeDateTime` are both represented by `ColumnUInt32` or `ColumnConstUInt32`. In addition, the same data type can be represented by different `IColumn` implementations. For example, `DataTypeUInt8` can be represented by `ColumnUInt8` or `ColumnConstUInt8`. -`IDataType` only stores metadata. For instance, `DataTypeUInt8` doesn’t store anything at all (except virtual pointer `vptr`) and `DataTypeFixedString` stores just `N` (the size of fixed-size strings). +`IDataType` only stores metadata. For instance, `DataTypeUInt8` does not store anything at all (except virtual pointer `vptr`) and `DataTypeFixedString` stores just `N` (the size of fixed-size strings). `IDataType` has helper methods for various data formats. Examples are methods to serialize a value with possible quoting, to serialize a value for JSON, and to serialize a value as part of the XML format. There is no direct correspondence to data formats. For example, the different data formats `Pretty` and `TabSeparated` can use the same `serializeTextEscaped` helper method from the `IDataType` interface. @@ -43,7 +43,7 @@ Various functions on columns can be implemented in a generic, non-efficient way A `Block` is a container that represents a subset (chunk) of a table in memory. It is just a set of triples: `(IColumn, IDataType, column name)`. During query execution, data is processed by `Block`s. 
If we have a `Block`, we have data (in the `IColumn` object), we have information about its type (in `IDataType`) that tells us how to deal with that column, and we have the column name. It could be either the original column name from the table or some artificial name assigned for getting temporary results of calculations. -When we calculate some function over columns in a block, we add another column with its result to the block, and we don’t touch columns for arguments of the function because operations are immutable. Later, unneeded columns can be removed from the block, but not modified. It is convenient for the elimination of common subexpressions. +When we calculate some function over columns in a block, we add another column with its result to the block, and we do not touch columns for arguments of the function because operations are immutable. Later, unneeded columns can be removed from the block, but not modified. It is convenient for the elimination of common subexpressions. Blocks are created for every processed chunk of data. Note that for the same type of calculation, the column names and types remain the same for different blocks, and only column data changes. It is better to split block data from the block header because small block sizes have a high overhead of temporary strings for copying shared_ptrs and column names. @@ -118,11 +118,11 @@ Interpreters are responsible for creating the query execution pipeline from an ` There are ordinary functions and aggregate functions. For aggregate functions, see the next section. -Ordinary functions don’t change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`’s of data to implement vectorized query execution. +Ordinary functions do not change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`’s of data to implement vectorized query execution. There are some miscellaneous functions, like [blockSize](../sql-reference/functions/other-functions.md#function-blocksize), [rowNumberInBlock](../sql-reference/functions/other-functions.md#function-rownumberinblock), and [runningAccumulate](../sql-reference/functions/other-functions.md#runningaccumulate), that exploit block processing and violate the independence of rows. -ClickHouse has strong typing, so there’s no implicit type conversion. If a function doesn’t support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function. +ClickHouse has strong typing, so there’s no implicit type conversion. If a function does not support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function. Implementing a function may be slightly inconvenient because a function explicitly dispatches supported data types and supported `IColumns`. 
For example, the `plus` function has code generated by instantiation of a C++ template for each combination of numeric types, and constant or non-constant left and right arguments. @@ -152,7 +152,7 @@ Internally, it is just a primitive multithreaded server without coroutines or fi The server initializes the `Context` class with the necessary environment for query execution: the list of available databases, users and access rights, settings, clusters, the process list, the query log, and so on. Interpreters use this environment. -We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we don’t want to maintain it eternally, and we are removing support for old versions after about one year. +We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year. !!! note "Note" For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven’t released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical. @@ -169,13 +169,13 @@ There is no global query plan for distributed query execution. Each node has its `MergeTree` is a family of storage engines that supports indexing by primary key. The primary key can be an arbitrary tuple of columns or expressions. Data in a `MergeTree` table is stored in “parts”. Each part stores data in the primary key order, so data is ordered lexicographically by the primary key tuple. All the table columns are stored in separate `column.bin` files in these parts. The files consist of compressed blocks. Each block is usually from 64 KB to 1 MB of uncompressed data, depending on the average value size. The blocks consist of column values placed contiguously one after the other. Column values are in the same order for each column (the primary key defines the order), so when you iterate by many columns, you get values for the corresponding rows. -The primary key itself is “sparse”. It doesn’t address every single row, but only some ranges of data. A separate `primary.idx` file has the value of the primary key for each N-th row, where N is called `index_granularity` (usually, N = 8192). Also, for each column, we have `column.mrk` files with “marks,” which are offsets to each N-th row in the data file. Each mark is a pair: the offset in the file to the beginning of the compressed block, and the offset in the decompressed block to the beginning of data. Usually, compressed blocks are aligned by marks, and the offset in the decompressed block is zero. Data for `primary.idx` always resides in memory, and data for `column.mrk` files is cached. +The primary key itself is “sparse”. It does not address every single row, but only some ranges of data. A separate `primary.idx` file has the value of the primary key for each N-th row, where N is called `index_granularity` (usually, N = 8192). Also, for each column, we have `column.mrk` files with “marks,” which are offsets to each N-th row in the data file. 
Each mark is a pair: the offset in the file to the beginning of the compressed block, and the offset in the decompressed block to the beginning of data. Usually, compressed blocks are aligned by marks, and the offset in the decompressed block is zero. Data for `primary.idx` always resides in memory, and data for `column.mrk` files is cached. When we are going to read something from a part in `MergeTree`, we look at `primary.idx` data and locate ranges that could contain requested data, then look at `column.mrk` data and calculate offsets for where to start reading those ranges. Because of sparseness, excess data may be read. ClickHouse is not suitable for a high load of simple point queries, because the entire range with `index_granularity` rows must be read for each key, and the entire compressed block must be decompressed for each column. We made the index sparse because we must be able to maintain trillions of rows per single server without noticeable memory consumption for the index. Also, because the primary key is sparse, it is not unique: it cannot check the existence of the key in the table at INSERT time. You could have many rows with the same key in a table. When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by primary key order and forms a new part. There are background threads that periodically select some parts and merge them into a single sorted part to keep the number of parts relatively low. That’s why it is called `MergeTree`. Of course, merging leads to “write amplification”. All parts are immutable: they are only created and deleted, but not modified. When SELECT is executed, it holds a snapshot of the table (a set of parts). After merging, we also keep old parts for some time to make a recovery after failure easier, so if we see that some merged part is probably broken, we can replace it with its source parts. -`MergeTree` is not an LSM tree because it doesn’t contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity’s sake, and because we are already inserting data in batches in our applications. +`MergeTree` is not an LSM tree because it does not contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity’s sake, and because we are already inserting data in batches in our applications. There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form. @@ -185,7 +185,7 @@ Replication in ClickHouse can be configured on a per-table basis. You could have Replication is implemented in the `ReplicatedMergeTree` storage engine. The path in `ZooKeeper` is specified as a parameter for the storage engine. All tables with the same path in `ZooKeeper` become replicas of each other: they synchronize their data and maintain consistency. 
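For illustration, a minimal sketch (the table, ZooKeeper path, and replica names below are hypothetical, not taken from this patch) of how two tables on different hosts become replicas of each other by sharing a ZooKeeper path:

``` sql
-- On the first host; the path identifies the table in ZooKeeper,
-- the second engine argument names this replica.
CREATE TABLE hits_local (EventDate Date, UserID UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/01/hits_local', 'replica_1')
ORDER BY (EventDate, UserID);

-- On the second host: the same ZooKeeper path, a different replica name.
-- From this point on, the two tables synchronize their data with each other.
CREATE TABLE hits_local (EventDate Date, UserID UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/01/hits_local', 'replica_2')
ORDER BY (EventDate, UserID);
```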
Replicas can be added and removed dynamically simply by creating or dropping a table. -Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse doesn’t support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts, just-inserted data might be lost if one node fails. +Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse does not support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts, just-inserted data might be lost if one node fails. Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the “get the part” action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. One of the leaders initiates a new merge first and writes “merge parts” actions to the log. Multiple replicas (or all) can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges. diff --git a/docs/en/development/build-osx.md b/docs/en/development/build-osx.md index f34107ca3d3..a862bdeb299 100644 --- a/docs/en/development/build-osx.md +++ b/docs/en/development/build-osx.md @@ -20,7 +20,7 @@ Install the latest [Xcode](https://apps.apple.com/am/app/xcode/id497799835?mt=12 Open it at least once to accept the end-user license agreement and automatically install the required components. -Then, make sure that the latest Comman Line Tools are installed and selected in the system: +Then, make sure that the latest Command Line Tools are installed and selected in the system: ``` bash sudo rm -rf /Library/Developer/CommandLineTools diff --git a/docs/en/development/build.md b/docs/en/development/build.md index b6cb68f7ff8..8ef12221e8d 100644 --- a/docs/en/development/build.md +++ b/docs/en/development/build.md @@ -134,7 +134,7 @@ $ ./release ## Faster builds for development -Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into serveral shared libraries. To enable these tweaks, pass the following flags to `cmake`: +Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. 
To enable these tweaks, pass the following flags to `cmake`: ``` -DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1 diff --git a/docs/en/development/developer-instruction.md b/docs/en/development/developer-instruction.md index 35ca4725af8..ac6d4a2b563 100644 --- a/docs/en/development/developer-instruction.md +++ b/docs/en/development/developer-instruction.md @@ -15,7 +15,7 @@ ClickHouse cannot work or build on a 32-bit system. You should acquire access to To start working with ClickHouse repository you will need a GitHub account. -You probably already have one, but if you don’t, please register at https://github.com. In case you do not have SSH keys, you should generate them and then upload them on GitHub. It is required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those. +You probably already have one, but if you do not, please register at https://github.com. In case you do not have SSH keys, you should generate them and then upload them on GitHub. It is required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those. Create a fork of ClickHouse repository. To do that please click on the “fork” button in the upper right corner at https://github.com/ClickHouse/ClickHouse. It will fork your own copy of ClickHouse/ClickHouse to your account. diff --git a/docs/en/development/style.md b/docs/en/development/style.md index b27534d9890..2151735c2f4 100644 --- a/docs/en/development/style.md +++ b/docs/en/development/style.md @@ -195,7 +195,7 @@ std::cerr << static_cast(c) << std::endl; The same is true for small methods in any classes or structs. -For templated classes and structs, don’t separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit). +For templated classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit). **31.** You can wrap lines at 140 characters, instead of 80. @@ -442,7 +442,7 @@ Use `RAII` and see above. **3.** Error handling. -Use exceptions. In most cases, you only need to throw an exception, and don’t need to catch it (because of `RAII`). +Use exceptions. In most cases, you only need to throw an exception, and do not need to catch it (because of `RAII`). In offline data processing applications, it’s often acceptable to not catch exceptions. @@ -599,7 +599,7 @@ public: There is no need to use a separate `namespace` for application code. -Small libraries don’t need this, either. +Small libraries do not need this, either. For medium to large libraries, put everything in a `namespace`. @@ -755,9 +755,9 @@ If there is a good solution already available, then use it, even if it means you (But be prepared to remove bad libraries from code.) -**3.** You can install a library that isn’t in the packages, if the packages don’t have what you need or have an outdated version or the wrong type of compilation. +**3.** You can install a library that isn’t in the packages, if the packages do not have what you need or have an outdated version or the wrong type of compilation. -**4.** If the library is small and doesn’t have its own complex build system, put the source files in the `contrib` folder. 
+**4.** If the library is small and does not have its own complex build system, put the source files in the `contrib` folder. **5.** Preference is always given to libraries that are already in use. diff --git a/docs/en/development/tests.md b/docs/en/development/tests.md index 7547497b9af..4231bda6c35 100644 --- a/docs/en/development/tests.md +++ b/docs/en/development/tests.md @@ -35,7 +35,7 @@ Tests should use (create, drop, etc) only tables in `test` database that is assu ### Choosing the Test Name -The name of the test starts with a five-digit prefix followed by a descriptive name, such as `00422_hash_function_constexpr.sql`. To choose the prefix, find the largest prefix already present in the directory, and increment it by one. In the meantime, some other tests might be added with the same numeric prefix, but this is OK and doesn't lead to any problems, you don't have to change it later. +The name of the test starts with a five-digit prefix followed by a descriptive name, such as `00422_hash_function_constexpr.sql`. To choose the prefix, find the largest prefix already present in the directory, and increment it by one. In the meantime, some other tests might be added with the same numeric prefix, but this is OK and does not lead to any problems, you don't have to change it later. Some tests are marked with `zookeeper`, `shard` or `long` in their names. `zookeeper` is for tests that are using ZooKeeper. `shard` is for tests that requires server to listen `127.0.0.*`; `distributed` or `global` have the same meaning. `long` is for tests that run slightly longer that one second. You can disable these groups of tests using `--no-zookeeper`, `--no-shard` and `--no-long` options, respectively. Make sure to add a proper prefix to your test name if it needs ZooKeeper or distributed queries. @@ -51,7 +51,7 @@ Do not check for a particular wording of error message, it may change in the fut ### Testing a Distributed Query -If you want to use distributed queries in functional tests, you can leverage `remote` table function with `127.0.0.{1..2}` addresses for the server to query itself; or you can use predefined test clusters in server configuration file like `test_shard_localhost`. Remember to add the words `shard` or `distributed` to the test name, so that it is ran in CI in correct configurations, where the server is configured to support distributed queries. +If you want to use distributed queries in functional tests, you can leverage `remote` table function with `127.0.0.{1..2}` addresses for the server to query itself; or you can use predefined test clusters in server configuration file like `test_shard_localhost`. Remember to add the words `shard` or `distributed` to the test name, so that it is run in CI in correct configurations, where the server is configured to support distributed queries. ## Known Bugs {#known-bugs} @@ -60,11 +60,11 @@ If we know some bugs that can be easily reproduced by functional tests, we place ## Integration Tests {#integration-tests} -Integration tests allow to test ClickHouse in clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software. +Integration tests allow testing ClickHouse in clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. 
These tests are run under Docker and create multiple containers with various software. See `tests/integration/README.md` on how to run these tests. -Note that integration of ClickHouse with third-party drivers is not tested. Also we currently don’t have integration tests with our JDBC and ODBC drivers. +Note that integration of ClickHouse with third-party drivers is not tested. Also, we currently do not have integration tests with our JDBC and ODBC drivers. ## Unit Tests {#unit-tests} @@ -123,7 +123,7 @@ Example with gdb: $ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml ``` -If the system clickhouse-server is already running and you don’t want to stop it, you can change port numbers in your `config.xml` (or override them in a file in `config.d` directory), provide appropriate data path, and run it. +If the system clickhouse-server is already running and you do not want to stop it, you can change port numbers in your `config.xml` (or override them in a file in `config.d` directory), provide appropriate data path, and run it. `clickhouse` binary has almost no dependencies and works across wide range of Linux distributions. To quick and dirty test your changes on a server, you can simply `scp` your fresh built `clickhouse` binary to your server and then run it as in examples above. @@ -161,7 +161,7 @@ $ clickhouse benchmark --concurrency 16 < queries.tsv Then leave it for a night or weekend and go take a rest. -You should check that `clickhouse-server` doesn’t crash, memory footprint is bounded and performance not degrading over time. +You should check that `clickhouse-server` does not crash, memory footprint is bounded and performance not degrading over time. Precise query execution timings are not recorded and not compared due to high variability of queries and environment. @@ -230,7 +230,7 @@ Fuzzers are not built by default. To build fuzzers both `-DENABLE_FUZZING=1` and We recommend to disable Jemalloc while building fuzzers. Configuration used to integrate ClickHouse fuzzing to Google OSS-Fuzz can be found at `docker/fuzz`. -We also use simple fuzz test to generate random SQL queries and to check that the server doesn’t die executing them. +We also use simple fuzz test to generate random SQL queries and to check that the server does not die executing them. You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer). We also use sophisticated AST-based query fuzzer that is able to find huge amount of corner cases. It does random permutations and substitutions in queries AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.tech/blog/en/2021/fuzzing-clickhouse/). @@ -332,7 +332,7 @@ We run tests with Yandex internal CI and job automation system named “Sandbox Build jobs and tests are run in Sandbox on per commit basis. Resulting packages and test results are published in GitHub and can be downloaded by direct links. Artifacts are stored for several months. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you. -We don’t use Travis CI due to the limit on time and computational power. -We don’t use Jenkins. It was used before and now we are happy we are not using Jenkins. 
+We do not use Travis CI due to the limit on time and computational power. +We do not use Jenkins. It was used before and now we are happy we are not using Jenkins. [Original article](https://clickhouse.tech/docs/en/development/tests/) diff --git a/docs/en/engines/database-engines/atomic.md b/docs/en/engines/database-engines/atomic.md index d897631dd6e..4f5f69a5ab7 100644 --- a/docs/en/engines/database-engines/atomic.md +++ b/docs/en/engines/database-engines/atomic.md @@ -47,7 +47,7 @@ EXCHANGE TABLES new_table AND old_table; ### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database} -For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables is recomended do not specify parameters of engine - path in ZooKeeper and replica name. In this case will be used parameters of the configuration [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). If you want specify parameters of engine explicitly than recomended to use {uuid} macros. This is useful so that unique paths are automatically generated for each table in the ZooKeeper. +For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify engine parameters - path in ZooKeeper and replica name. In this case, the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) will be used. If you want to specify engine parameters explicitly, it is recommended to use {uuid} macros. This is useful so that unique paths are automatically generated for each table in ZooKeeper. ## See Also diff --git a/docs/en/engines/table-engines/index.md b/docs/en/engines/table-engines/index.md index eb4fc583f88..13b3395e15b 100644 --- a/docs/en/engines/table-engines/index.md +++ b/docs/en/engines/table-engines/index.md @@ -82,8 +82,8 @@ Virtual column is an integral table engine attribute that is defined in the engi You shouldn’t specify virtual columns in the `CREATE TABLE` query and you can’t see them in `SHOW CREATE TABLE` and `DESCRIBE TABLE` query results. Virtual columns are also read-only, so you can’t insert data into virtual columns. -To select data from a virtual column, you must specify its name in the `SELECT` query. `SELECT *` doesn’t return values from virtual columns. +To select data from a virtual column, you must specify its name in the `SELECT` query. `SELECT *` does not return values from virtual columns. -If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible. We don’t recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore. +If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible. We do not recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.
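As a short illustration (the table name `visits` is hypothetical), a virtual column such as `_part` of a MergeTree table has to be named explicitly, since `SELECT *` does not return it:

``` sql
-- _part holds the name of the data part each row was read from.
SELECT _part, count() AS cnt
FROM visits
GROUP BY _part
ORDER BY _part;
```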
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/) diff --git a/docs/en/engines/table-engines/integrations/kafka.md b/docs/en/engines/table-engines/integrations/kafka.md index 2eebf5bdb92..a3a13f9d152 100644 --- a/docs/en/engines/table-engines/integrations/kafka.md +++ b/docs/en/engines/table-engines/integrations/kafka.md @@ -40,7 +40,7 @@ Required parameters: - `kafka_broker_list` — A comma-separated list of brokers (for example, `localhost:9092`). - `kafka_topic_list` — A list of Kafka topics. -- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you don’t want messages to be duplicated in the cluster, use the same group name everywhere. +- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you do not want messages to be duplicated in the cluster, use the same group name everywhere. - `kafka_format` — Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the [Formats](../../../interfaces/formats.md) section. Optional parameters: diff --git a/docs/en/engines/table-engines/integrations/mysql.md b/docs/en/engines/table-engines/integrations/mysql.md index 3847e7a9e0e..9bd12e97dd8 100644 --- a/docs/en/engines/table-engines/integrations/mysql.md +++ b/docs/en/engines/table-engines/integrations/mysql.md @@ -15,7 +15,12 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1], name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2], ... -) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']); +) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']) +SETTINGS + [connection_pool_size=16, ] + [connection_max_tries=3, ] + [connection_auto_close=true ] +; ``` See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query. diff --git a/docs/en/engines/table-engines/log-family/index.md b/docs/en/engines/table-engines/log-family/index.md index 1f6d88c20e3..8cdde239f44 100644 --- a/docs/en/engines/table-engines/log-family/index.md +++ b/docs/en/engines/table-engines/log-family/index.md @@ -38,7 +38,7 @@ Engines: ## Differences {#differences} -The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine doesn’t support parallel data reading by several threads in a single query. It reads data slower than other engines in the family that support parallel reading from a single query and it uses almost as many file descriptors as the `Log` engine because it stores each column in a separate file. Use it only in simple scenarios. +The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine does not support parallel data reading by several threads in a single query. It reads data slower than other engines in the family that support parallel reading from a single query and it uses almost as many file descriptors as the `Log` engine because it stores each column in a separate file. Use it only in simple scenarios. The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. 
The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer file descriptors, but the `Log` engine provides higher efficiency when reading data. diff --git a/docs/en/engines/table-engines/mergetree-family/collapsingmergetree.md b/docs/en/engines/table-engines/mergetree-family/collapsingmergetree.md index ea0b265d652..4ec976eda30 100644 --- a/docs/en/engines/table-engines/mergetree-family/collapsingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/collapsingmergetree.md @@ -126,7 +126,7 @@ Also when there are at least 2 more “state” rows than “cancel” rows, or Thus, collapsing should not change the results of calculating statistics. Changes gradually collapsed so that in the end only the last state of almost every object left. -The `Sign` is required because the merging algorithm doesn’t guarantee that all of the rows with the same sorting key will be in the same resulting data part and even on the same physical server. ClickHouse process `SELECT` queries with multiple threads, and it can not predict the order of rows in the result. The aggregation is required if there is a need to get completely “collapsed” data from `CollapsingMergeTree` table. +The `Sign` is required because the merging algorithm does not guarantee that all of the rows with the same sorting key will be in the same resulting data part and even on the same physical server. ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. The aggregation is required if there is a need to get completely “collapsed” data from `CollapsingMergeTree` table. To finalize collapsing, write a query with `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and so on, and also add `HAVING sum(Sign) > 0`. diff --git a/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key.md b/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key.md index 855d5fdadf4..535922875ef 100644 --- a/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key.md +++ b/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key.md @@ -33,6 +33,8 @@ ORDER BY (CounterID, StartDate, intHash32(UserID)); In this example, we set partitioning by the event types that occurred during the current week. +By default, the floating-point partition key is not supported. To use it, enable the setting [allow_floating_point_partition_key](../../../operations/settings/merge-tree-settings.md#allow_floating_point_partition_key). + When inserting new data to a table, this data is stored as a separate part (chunk) sorted by the primary key. In 10-15 minutes after inserting, the parts of the same partition are merged into the entire part. !!! info "Info" diff --git a/docs/en/engines/table-engines/mergetree-family/graphitemergetree.md b/docs/en/engines/table-engines/mergetree-family/graphitemergetree.md index 14df9ae130e..3ead798503d 100644 --- a/docs/en/engines/table-engines/mergetree-family/graphitemergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/graphitemergetree.md @@ -7,7 +7,7 @@ toc_title: GraphiteMergeTree This engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data.
It may be helpful to developers who want to use ClickHouse as a data store for Graphite. -You can use any ClickHouse table engine to store the Graphite data if you don’t need rollup, but if you need a rollup use `GraphiteMergeTree`. The engine reduces the volume of storage and increases the efficiency of queries from Graphite. +You can use any ClickHouse table engine to store the Graphite data if you do not need rollup, but if you need a rollup use `GraphiteMergeTree`. The engine reduces the volume of storage and increases the efficiency of queries from Graphite. The engine inherits properties from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md). diff --git a/docs/en/engines/table-engines/mergetree-family/mergetree.md b/docs/en/engines/table-engines/mergetree-family/mergetree.md index 8743090df41..e385b234cd8 100644 --- a/docs/en/engines/table-engines/mergetree-family/mergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/mergetree.md @@ -64,7 +64,7 @@ For a description of parameters, see the [CREATE query description](../../../sql ClickHouse uses the sorting key as a primary key if the primary key is not defined obviously by the `PRIMARY KEY` clause. - Use the `ORDER BY tuple()` syntax, if you don’t need sorting. See [Selecting the Primary Key](#selecting-the-primary-key). + Use the `ORDER BY tuple()` syntax, if you do not need sorting. See [Selecting the Primary Key](#selecting-the-primary-key). - `PARTITION BY` — The [partitioning key](../../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Optional. @@ -162,7 +162,7 @@ Data parts can be stored in `Wide` or `Compact` format. In `Wide` format each co Data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the table engine. If the number of bytes or rows in a data part is less then the corresponding setting's value, the part is stored in `Compact` format. Otherwise it is stored in `Wide` format. If none of these settings is set, data parts are stored in `Wide` format. -Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse doesn’t split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for the row. For each data part, ClickHouse creates an index file that stores the marks. For each column, whether it’s in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in column files. +Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse does not split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for the row. For each data part, ClickHouse creates an index file that stores the marks. For each column, whether it’s in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in column files. The granule size is restricted by the `index_granularity` and `index_granularity_bytes` settings of the table engine. The number of rows in a granule lays in the `[1, index_granularity]` range, depending on the size of the rows. The size of a granule can exceed `index_granularity_bytes` if the size of a single row is greater than the value of the setting. 
In this case, the size of the granule equals the size of the row. @@ -227,7 +227,7 @@ This feature is helpful when using the [SummingMergeTree](../../../engines/table In this case it makes sense to leave only a few columns in the primary key that will provide efficient range scans and add the remaining dimension columns to the sorting key tuple. -[ALTER](../../../sql-reference/statements/alter/index.md) of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts don’t need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification. +[ALTER](../../../sql-reference/statements/alter/index.md) of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts do not need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification. ### Use of Indexes and Partitions in Queries {#use-of-indexes-and-partitions-in-queries} @@ -265,7 +265,7 @@ The key for partitioning by month allows reading only those data blocks which co Consider, for example, the days of the month. They form a [monotonic sequence](https://en.wikipedia.org/wiki/Monotonic_function) for one month, but not monotonic for more extended periods. This is a partially-monotonic sequence. If a user creates the table with partially-monotonic primary key, ClickHouse creates a sparse index as usual. When a user selects data from this kind of table, ClickHouse analyzes the query conditions. If the user wants to get data between two marks of the index and both these marks fall within one month, ClickHouse can use the index in this particular case because it can calculate the distance between the parameters of a query and index marks. -ClickHouse cannot use an index if the values of the primary key in the query parameter range don’t represent a monotonic sequence. In this case, ClickHouse uses the full scan method. +ClickHouse cannot use an index if the values of the primary key in the query parameter range do not represent a monotonic sequence. In this case, ClickHouse uses the full scan method. ClickHouse uses this logic not only for days of the month sequences, but for any primary key that represents a partially-monotonic sequence. diff --git a/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md b/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md index b82bc65afc2..ca0db24e640 100644 --- a/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md @@ -7,9 +7,9 @@ toc_title: ReplacingMergeTree The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`). -Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. 
Although you can run an unscheduled merge using the `OPTIMIZE` query, don’t count on using it, because the `OPTIMIZE` query will read and write a large amount of data. +Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, do not count on using it, because the `OPTIMIZE` query will read and write a large amount of data. -Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it doesn’t guarantee the absence of duplicates. +Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it does not guarantee the absence of duplicates. ## Creating a Table {#creating-a-table} @@ -34,7 +34,7 @@ For a description of request parameters, see [statement description](../../../sq **ReplacingMergeTree Parameters** -- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter. +- `ver` — column with the version number. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter. When merging, `ReplacingMergeTree` from all the rows with the same sorting key leaves only one: @@ -66,5 +66,3 @@ All of the parameters excepting `ver` have the same meaning as in `MergeTree`. - `ver` - column with the version. Optional parameter. For a description, see the text above. - -[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replacingmergetree/) diff --git a/docs/en/engines/table-engines/mergetree-family/replication.md b/docs/en/engines/table-engines/mergetree-family/replication.md index ef34c8d3804..2db6686beb7 100644 --- a/docs/en/engines/table-engines/mergetree-family/replication.md +++ b/docs/en/engines/table-engines/mergetree-family/replication.md @@ -95,17 +95,19 @@ If ZooKeeper isn’t set in the config file, you can’t create replicated table ZooKeeper is not used in `SELECT` queries because replication does not affect the performance of `SELECT` and queries run just as fast as they do for non-replicated tables. When querying distributed replicated tables, ClickHouse behavior is controlled by the settings [max_replica_delay_for_distributed_queries](../../../operations/settings/settings.md#settings-max_replica_delay_for_distributed_queries) and [fallback_to_stale_replicas_for_distributed_queries](../../../operations/settings/settings.md#settings-fallback_to_stale_replicas_for_distributed_queries). -For each `INSERT` query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block or one block per `max_insert_block_size = 1048576` rows.) This leads to slightly longer latencies for `INSERT` compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one `INSERT` per second, it doesn’t create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred `INSERTs` per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data. +For each `INSERT` query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block or one block per `max_insert_block_size = 1048576` rows.) 
This leads to slightly longer latencies for `INSERT` compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one `INSERT` per second, it does not create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred `INSERTs` per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data. For very large clusters, you can use different ZooKeeper clusters for different shards. However, this hasn’t proven necessary on the Yandex.Metrica cluster (approximately 300 servers). Replication is asynchronous and multi-master. `INSERT` queries (as well as `ALTER`) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas are not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network. The number of threads performing background tasks for replicated tables can be set by [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) setting. +`ReplicatedMergeTree` engine uses a separate thread pool for replicated fetches. Size of the pool is limited by the [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size) setting which can be tuned with a server restart. + By default, an INSERT query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the `insert_quorum` option. Each block of data is written atomically. The INSERT query is divided into blocks up to `max_insert_block_size = 1048576` rows. In other words, if the `INSERT` query has less than 1048576 rows, it is made atomically. -Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application doesn’t know if the data was written to the DB, so the `INSERT` query can simply be repeated. It doesn’t matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-merge_tree) server settings. +Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application does not know if the data was written to the DB, so the `INSERT` query can simply be repeated. It does not matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-merge_tree) server settings. 
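As a rough illustration of this idempotency (the table and values here are hypothetical), a client that is unsure whether its insert succeeded can simply send the identical block again:

``` sql
-- The second statement sends the same block as the first,
-- so the replicated table deduplicates it and stores the rows only once.
INSERT INTO replicated_hits (EventDate, UserID) VALUES ('2021-05-01', 1), ('2021-05-01', 2);
INSERT INTO replicated_hits (EventDate, UserID) VALUES ('2021-05-01', 1), ('2021-05-01', 2);
```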
During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.) @@ -172,7 +174,7 @@ In this case, the path consists of the following parts: `{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the Yandex.Metrica cluster uses bi-level sharding. For most tasks, you can leave just the {shard} substitution, which will be expanded to the shard identifier. -`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it doesn’t change after a RENAME query. +`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query. *HINT*: you could add a database name in front of `table_name` as well. E.g. `db_name.table_name` The two built-in substitutions `{database}` and `{table}` can be used, they expand into the table name and the database name respectively (unless these macros are defined in the `macros` section). So the zookeeper path can be specified as `'/clickhouse/tables/{layer}-{shard}/{database}/{table}'`. @@ -284,6 +286,7 @@ If the data in ZooKeeper was lost or damaged, you can save data by moving it to **See Also** - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) +- [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size) - [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold) [Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/) diff --git a/docs/en/engines/table-engines/mergetree-family/summingmergetree.md b/docs/en/engines/table-engines/mergetree-family/summingmergetree.md index 1f23e4daf51..9bfd1816d32 100644 --- a/docs/en/engines/table-engines/mergetree-family/summingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/summingmergetree.md @@ -7,7 +7,7 @@ toc_title: SummingMergeTree The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree). The difference is that when merging data parts for `SummingMergeTree` tables ClickHouse replaces all the rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with one row which contains summarized values for the columns with the numeric data type. If the sorting key is composed in a way that a single key value corresponds to large number of rows, this significantly reduces storage volume and speeds up data selection. -We recommend to use the engine together with `MergeTree`. Store complete data in `MergeTree` table, and use `SummingMergeTree` for aggregated data storing, for example, when preparing reports. Such an approach will prevent you from losing valuable data due to an incorrectly composed primary key. +We recommend using the engine together with `MergeTree`. 
Store complete data in `MergeTree` table, and use `SummingMergeTree` for aggregated data storing, for example, when preparing reports. Such an approach will prevent you from losing valuable data due to an incorrectly composed primary key. ## Creating a Table {#creating-a-table} diff --git a/docs/en/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md b/docs/en/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md index b23139b402b..93c35344e24 100644 --- a/docs/en/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md @@ -133,7 +133,7 @@ When ClickHouse inserts data, it orders rows by the primary key. If the `Version ## Selecting Data {#selecting-data} -ClickHouse doesn’t guarantee that all of the rows with the same primary key will be in the same resulting data part or even on the same physical server. This is true both for writing the data and for subsequent merging of the data parts. In addition, ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. This means that aggregation is required if there is a need to get completely “collapsed” data from a `VersionedCollapsingMergeTree` table. +ClickHouse does not guarantee that all of the rows with the same primary key will be in the same resulting data part or even on the same physical server. This is true both for writing the data and for subsequent merging of the data parts. In addition, ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. This means that aggregation is required if there is a need to get completely “collapsed” data from a `VersionedCollapsingMergeTree` table. To finalize collapsing, write a query with a `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and add `HAVING sum(Sign) > 0`. @@ -219,7 +219,7 @@ HAVING sum(Sign) > 0 └─────────────────────┴───────────┴──────────┴─────────┘ ``` -If we don’t need aggregation and want to force collapsing, we can use the `FINAL` modifier for the `FROM` clause. +If we do not need aggregation and want to force collapsing, we can use the `FINAL` modifier for the `FROM` clause. ``` sql SELECT * FROM UAct FINAL diff --git a/docs/en/engines/table-engines/special/buffer.md b/docs/en/engines/table-engines/special/buffer.md index 8245cd19e8c..cacb310a15c 100644 --- a/docs/en/engines/table-engines/special/buffer.md +++ b/docs/en/engines/table-engines/special/buffer.md @@ -20,11 +20,11 @@ Engine parameters: Optional engine parameters: -- `flush_time`, `flush_rows`, `flush_bytes` – Conditions for flushing data from the buffer, that will happen only in background (ommited or zero means no `flush*` parameters). +- `flush_time`, `flush_rows`, `flush_bytes` – Conditions for flushing data from the buffer, that will happen only in background (omitted or zero means no `flush*` parameters). Data is flushed from the buffer and written to the destination table if all the `min*` conditions or at least one `max*` condition are met. -Also if at least one `flush*` condition are met flush initiated in background, this is different from `max*`, since `flush*` allows you to configure background flushes separately to avoid adding latency for `INSERT` (into `Buffer`) queries. 
+Also, if at least one `flush*` condition is met, a flush is initiated in the background; this is different from `max*`, since `flush*` allows you to configure background flushes separately to avoid adding latency for `INSERT` (into `Buffer`) queries. - `min_time`, `max_time`, `flush_time` – Condition for the time in seconds from the moment of the first write to the buffer. - `min_rows`, `max_rows`, `flush_rows` – Condition for the number of rows in the buffer. @@ -49,12 +49,12 @@ You can set empty strings in single quotation marks for the database and table n When reading from a Buffer table, data is processed both from the buffer and from the destination table (if there is one). Note that Buffer tables do not support an index. In other words, data in the buffer is fully scanned, which might be slow for large buffers. (For data in a subordinate table, the index that it supports will be used.) -If the set of columns in the Buffer table doesn’t match the set of columns in a subordinate table, a subset of columns that exist in both tables is inserted. +If the set of columns in the Buffer table does not match the set of columns in a subordinate table, a subset of columns that exist in both tables is inserted. -If the types don’t match for one of the columns in the Buffer table and a subordinate table, an error message is entered in the server log and the buffer is cleared. -The same thing happens if the subordinate table doesn’t exist when the buffer is flushed. +If the types do not match for one of the columns in the Buffer table and a subordinate table, an error message is entered in the server log, and the buffer is cleared. +The same thing happens if the subordinate table does not exist when the buffer is flushed. -If you need to run ALTER for a subordinate table and the Buffer table, we recommend first deleting the Buffer table, running ALTER for the subordinate table, then creating the Buffer table again. +If you need to run ALTER for a subordinate table and the Buffer table, we recommend first deleting the Buffer table, running ALTER for the subordinate table, and then creating the Buffer table again. If the server is restarted abnormally, the data in the buffer is lost. @@ -70,6 +70,6 @@ Due to these disadvantages, we can only recommend using a Buffer table in rare c A Buffer table is used when too many INSERTs are received from a large number of servers over a unit of time and data can’t be buffered before insertion, which means the INSERTs can’t run fast enough. -Note that it doesn’t make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”). +Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”).
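For example, a Buffer table in front of a destination table might look like this; the `merge.hits` table and the numeric thresholds are illustrative only:

``` sql
-- 16 buffer layers; data is flushed to merge.hits when the min*/max* conditions above are met.
CREATE TABLE merge.hits_buffer AS merge.hits
ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000);

-- Inserts are addressed to the Buffer table; reads see both buffered and already flushed data.
SELECT count() FROM merge.hits_buffer;
```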
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/buffer/) diff --git a/docs/en/engines/table-engines/special/distributed.md b/docs/en/engines/table-engines/special/distributed.md index c47e0c27cd2..6de6602a216 100644 --- a/docs/en/engines/table-engines/special/distributed.md +++ b/docs/en/engines/table-engines/special/distributed.md @@ -25,7 +25,7 @@ The Distributed engine accepts parameters: - [insert_distributed_sync](../../../operations/settings/settings.md#insert_distributed_sync) setting - [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) for the examples -Also it accept the following settings: +Also, it accepts the following settings: - `fsync_after_insert` - do the `fsync` for the file data after asynchronous insert to Distributed. Guarantees that the OS flushed the whole inserted data to a file **on the initiator node** disk. @@ -124,7 +124,7 @@ Replicas are duplicating servers (in order to read all the data, you can access Cluster names must not contain dots. The parameters `host`, `port`, and optionally `user`, `password`, `secure`, `compression` are specified for each server: -- `host` – The address of the remote server. You can use either the domain or the IPv4 or IPv6 address. If you specify the domain, the server makes a DNS request when it starts, and the result is stored as long as the server is running. If the DNS request fails, the server doesn’t start. If you change the DNS record, restart the server. +- `host` – The address of the remote server. You can use either the domain or the IPv4 or IPv6 address. If you specify the domain, the server makes a DNS request when it starts, and the result is stored as long as the server is running. If the DNS request fails, the server does not start. If you change the DNS record, restart the server. - `port` – The TCP port for messenger activity (`tcp_port` in the config, usually set to 9000). Do not confuse it with http_port. - `user` – Name of the user for connecting to a remote server. Default value: default. This user must have access to connect to the specified server. Access is configured in the users.xml file. For more information, see the section [Access rights](../../../operations/access-rights.md). - `password` – The password for connecting to a remote server (not masked). Default value: empty string. @@ -143,13 +143,13 @@ To view your clusters, use the `system.clusters` table. The Distributed engine allows working with a cluster like a local server. However, the cluster is inextensible: you must write its configuration in the server config file (even better, for all the cluster’s servers). -The Distributed engine requires writing clusters to the config file. Clusters from the config file are updated on the fly, without restarting the server. If you need to send a query to an unknown set of shards and replicas each time, you don’t need to create a Distributed table – use the `remote` table function instead. See the section [Table functions](../../../sql-reference/table-functions/index.md). +The Distributed engine requires writing clusters to the config file. Clusters from the config file are updated on the fly, without restarting the server. If you need to send a query to an unknown set of shards and replicas each time, you do not need to create a Distributed table – use the `remote` table function instead. See the section [Table functions](../../../sql-reference/table-functions/index.md). 
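For example (the host names and table are placeholders), an ad-hoc query over a couple of servers can use `remote` directly:

``` sql
-- Queries both servers and merges the results without a Distributed table.
SELECT count()
FROM remote('example-host-1,example-host-2', default.hits)
WHERE EventDate = today();
```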
There are two methods for writing data to a cluster: First, you can define which servers to write which data to and perform the write directly on each shard. In other words, perform INSERT in the tables that the distributed table “looks at”. This is the most flexible solution as you can use any sharding scheme, which could be non-trivial due to the requirements of the subject area. This is also the most optimal solution since data can be written to different shards completely independently. -Second, you can perform INSERT in a Distributed table. In this case, the table will distribute the inserted data across the servers itself. In order to write to a Distributed table, it must have a sharding key set (the last parameter). In addition, if there is only one shard, the write operation works without specifying the sharding key, since it doesn’t mean anything in this case. +Second, you can perform INSERT in a Distributed table. In this case, the table will distribute the inserted data across the servers itself. In order to write to a Distributed table, it must have a sharding key set (the last parameter). In addition, if there is only one shard, the write operation works without specifying the sharding key, since it does not mean anything in this case. Each shard can have a weight defined in the config file. By default, the weight is equal to one. Data is distributed across shards in the amount proportional to the shard weight. For example, if there are two shards and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9 / 19 parts of the rows, and the second will be sent 10 / 19. @@ -165,7 +165,7 @@ The sharding expression can be any expression from constants and table columns t A simple remainder from the division is a limited solution for sharding and isn’t always appropriate. It works for medium and large volumes of data (dozens of servers), but not for very large volumes of data (hundreds of servers or more). In the latter case, use the sharding scheme required by the subject area, rather than using entries in Distributed tables. -SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you don’t have to transfer the old data to it. You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently. +SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you do not have to transfer the old data to it. You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently. You should be concerned about the sharding scheme in the following cases: diff --git a/docs/en/engines/table-engines/special/external-data.md b/docs/en/engines/table-engines/special/external-data.md index 88d76b3805e..b5429ccad12 100644 --- a/docs/en/engines/table-engines/special/external-data.md +++ b/docs/en/engines/table-engines/special/external-data.md @@ -9,7 +9,7 @@ ClickHouse allows sending a server the data that is needed for processing a quer For example, if you have a text file with important user identifiers, you can upload it to the server along with a query that uses filtration by this list.
-If you need to run more than one query with a large volume of external data, don’t use this feature. It is better to upload the data to the DB ahead of time. +If you need to run more than one query with a large volume of external data, do not use this feature. It is better to upload the data to the DB ahead of time. External data can be uploaded using the command-line client (in non-interactive mode), or using the HTTP interface. diff --git a/docs/en/engines/table-engines/special/file.md b/docs/en/engines/table-engines/special/file.md index 2acec40ef02..17eef2b4941 100644 --- a/docs/en/engines/table-engines/special/file.md +++ b/docs/en/engines/table-engines/special/file.md @@ -24,7 +24,7 @@ The `Format` parameter specifies one of the available file formats. To perform `INSERT` queries – for output. The available formats are listed in the [Formats](../../../interfaces/formats.md#formats) section. -ClickHouse does not allow to specify filesystem path for`File`. It will use folder defined by [path](../../../operations/server-configuration-parameters/settings.md) setting in server configuration. +ClickHouse does not allow specifying filesystem path for`File`. It will use folder defined by [path](../../../operations/server-configuration-parameters/settings.md) setting in server configuration. When creating table using `File(Format)` it creates empty subdirectory in that folder. When data is written to that table, it’s put into `data.Format` file in that subdirectory. diff --git a/docs/en/engines/table-engines/special/join.md b/docs/en/engines/table-engines/special/join.md index 30dbec73939..4cd1c741352 100644 --- a/docs/en/engines/table-engines/special/join.md +++ b/docs/en/engines/table-engines/special/join.md @@ -28,7 +28,7 @@ See the detailed description of the [CREATE TABLE](../../../sql-reference/statem - `join_type` – [JOIN type](../../../sql-reference/statements/select/join.md#select-join-types). - `k1[, k2, ...]` – Key columns from the `USING` clause that the `JOIN` operation is made with. -Enter `join_strictness` and `join_type` parameters without quotes, for example, `Join(ANY, LEFT, col1)`. They must match the `JOIN` operation that the table will be used for. If the parameters don’t match, ClickHouse doesn’t throw an exception and may return incorrect data. +Enter `join_strictness` and `join_type` parameters without quotes, for example, `Join(ANY, LEFT, col1)`. They must match the `JOIN` operation that the table will be used for. If the parameters do not match, ClickHouse does not throw an exception and may return incorrect data. ## Table Usage {#table-usage} diff --git a/docs/en/engines/table-engines/special/memory.md b/docs/en/engines/table-engines/special/memory.md index a6c833ebdba..b6402c2030b 100644 --- a/docs/en/engines/table-engines/special/memory.md +++ b/docs/en/engines/table-engines/special/memory.md @@ -6,7 +6,7 @@ toc_title: Memory # Memory Table Engine {#memory} The Memory engine stores data in RAM, in uncompressed form. Data is stored in exactly the same form as it is received when read. In other words, reading from this table is completely free. -Concurrent data access is synchronized. Locks are short: read and write operations don’t block each other. +Concurrent data access is synchronized. Locks are short: read and write operations do not block each other. Indexes are not supported. Reading is parallelized. Maximal productivity (over 10 GB/sec) is reached on simple queries, because there is no reading from the disk, decompressing, or deserializing data. 
(We should note that in many cases, the productivity of the MergeTree engine is almost as high.) diff --git a/docs/en/faq/general/columnar-database.md b/docs/en/faq/general/columnar-database.md index 1c6a2bc2989..e30b4a94a87 100644 --- a/docs/en/faq/general/columnar-database.md +++ b/docs/en/faq/general/columnar-database.md @@ -22,4 +22,4 @@ Here is the illustration of the difference between traditional row-oriented syst **Columnar** ![Columnar](https://clickhouse.tech/docs/en/images/column-oriented.gif#) -A columnar database is a preferred choice for analytical applications because it allows to have many columns in a table just in case, but don’t pay the cost for unused columns on read query execution time. Column-oriented databases are designed for big data processing because and data warehousing, they often natively scale using distributed clusters of low-cost hardware to increase throughput. ClickHouse does it with combination of [distributed](../../engines/table-engines/special/distributed.md) and [replicated](../../engines/table-engines/mergetree-family/replication.md) tables. +A columnar database is a preferred choice for analytical applications because it allows having many columns in a table just in case, without paying the cost for unused columns at read query execution time. Column-oriented databases are designed for big data processing and data warehousing; they often natively scale using distributed clusters of low-cost hardware to increase throughput. ClickHouse does it with a combination of [distributed](../../engines/table-engines/special/distributed.md) and [replicated](../../engines/table-engines/mergetree-family/replication.md) tables. diff --git a/docs/en/faq/general/dbms-naming.md b/docs/en/faq/general/dbms-naming.md index 88a66659ab3..d4e87ff450a 100644 --- a/docs/en/faq/general/dbms-naming.md +++ b/docs/en/faq/general/dbms-naming.md @@ -6,7 +6,7 @@ toc_priority: 10 # What Does “ClickHouse” Mean? {#what-does-clickhouse-mean} -It’s a combination of “**Click**stream” and “Data ware**House**”. It comes from the original use case at Yandex.Metrica, where ClickHouse was supposed to keep records of all clicks by people from all over the Internet and it still does the job. You can read more about this use case on [ClickHouse history](../../introduction/history.md) page. +It’s a combination of “**Click**stream” and “Data ware**House**”. It comes from the original use case at Yandex.Metrica, where ClickHouse was supposed to keep records of all clicks by people from all over the Internet, and it still does the job. You can read more about this use case on [ClickHouse history](../../introduction/history.md) page. This two-part meaning has two consequences: diff --git a/docs/en/faq/general/ne-tormozit.md b/docs/en/faq/general/ne-tormozit.md index 44fe686d670..17c5479fa6d 100644 --- a/docs/en/faq/general/ne-tormozit.md +++ b/docs/en/faq/general/ne-tormozit.md @@ -15,9 +15,9 @@ One of the following batches of those t-shirts was supposed to be given away on So, what does it mean? Here are some ways to translate *“не тормозит”*: -- If you translate it literally, it’d be something like *“ClickHouse doesn’t press the brake pedal”*. +- If you translate it literally, it’d be something like *“ClickHouse does not press the brake pedal”*. - If you’d want to express it as close to how it sounds to a Russian person with IT background, it’d be something like *“If your larger system lags, it’s not because it uses ClickHouse”*.
-- Shorter, but not so precise versions could be *“ClickHouse is not slow”*, *“ClickHouse doesn’t lag”* or just *“ClickHouse is fast”*. +- Shorter, but not so precise versions could be *“ClickHouse is not slow”*, *“ClickHouse does not lag”* or just *“ClickHouse is fast”*. If you haven’t seen one of those t-shirts in person, you can check them out online in many ClickHouse-related videos. For example, this one: diff --git a/docs/en/faq/general/olap.md b/docs/en/faq/general/olap.md index f023b8c3524..1f6df183f8c 100644 --- a/docs/en/faq/general/olap.md +++ b/docs/en/faq/general/olap.md @@ -31,7 +31,7 @@ All database management systems could be classified into two groups: OLAP (Onlin In practice OLAP and OLTP are not categories, it’s more like a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple storage systems integrated, which might be not so big deal but having more systems make it more expensive to maintain. So the trend of recent years is HTAP (**Hybrid Transactional/Analytical Processing**) when both kinds of the workload are handled equally well by a single database management system. -Even if a DBMS started as a pure OLAP or pure OLTP, they are forced to move towards that HTAP direction to keep up with their competition. And ClickHouse is no exception, initially, it has been designed as [fast-as-possible OLAP system](../../faq/general/why-clickhouse-is-so-fast.md) and it still doesn’t have full-fledged transaction support, but some features like consistent read/writes and mutations for updating/deleting data had to be added. +Even if a DBMS started as a pure OLAP or pure OLTP, they are forced to move towards that HTAP direction to keep up with their competition. And ClickHouse is no exception, initially, it has been designed as [fast-as-possible OLAP system](../../faq/general/why-clickhouse-is-so-fast.md) and it still does not have full-fledged transaction support, but some features like consistent read/writes and mutations for updating/deleting data had to be added. The fundamental trade-off between OLAP and OLTP systems remains: diff --git a/docs/en/faq/general/who-is-using-clickhouse.md b/docs/en/faq/general/who-is-using-clickhouse.md index 2ae07507123..b7ff867d726 100644 --- a/docs/en/faq/general/who-is-using-clickhouse.md +++ b/docs/en/faq/general/who-is-using-clickhouse.md @@ -6,9 +6,9 @@ toc_priority: 9 # Who Is Using ClickHouse? {#who-is-using-clickhouse} -Being an open-source product makes this question not so straightforward to answer. You don’t have to tell anyone if you want to start using ClickHouse, you just go grab source code or pre-compiled packages. There’s no contract to sign and the [Apache 2.0 license](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) allows for unconstrained software distribution. +Being an open-source product makes this question not so straightforward to answer. You do not have to tell anyone if you want to start using ClickHouse, you just go grab source code or pre-compiled packages. There’s no contract to sign and the [Apache 2.0 license](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) allows for unconstrained software distribution. -Also, the technology stack is often in a grey zone of what’s covered by an NDA. 
Some companies consider technologies they use as a competitive advantage even if they are open-source and don’t allow employees to share any details publicly. Some see some PR risks and allow employees to share implementation details only with their PR department approval. +Also, the technology stack is often in a grey zone of what’s covered by an NDA. Some companies consider technologies they use as a competitive advantage even if they are open-source and do not allow employees to share any details publicly. Some see some PR risks and allow employees to share implementation details only with their PR department approval. So how to tell who is using ClickHouse? diff --git a/docs/en/faq/index.md b/docs/en/faq/index.md index 1ae71cf680c..1e9c3b8ae64 100644 --- a/docs/en/faq/index.md +++ b/docs/en/faq/index.md @@ -39,7 +39,7 @@ Question candidates: - How to kill a process (query) in ClickHouse? - How to implement pivot (like in pandas)? - How to remove the default ClickHouse user through users.d? -- Importing MySQL dump to Clickhouse +- Importing MySQL dump to ClickHouse - Window function workarounds (row_number, lag/lead, running diff/sum/average) ##} diff --git a/docs/en/faq/operations/delete-old-data.md b/docs/en/faq/operations/delete-old-data.md index fdf1f1f290e..32fc485e98a 100644 --- a/docs/en/faq/operations/delete-old-data.md +++ b/docs/en/faq/operations/delete-old-data.md @@ -12,7 +12,7 @@ The short answer is “yes”. ClickHouse has multiple mechanisms that allow fre ClickHouse allows to automatically drop values when some condition happens. This condition is configured as an expression based on any columns, usually just static offset for any timestamp column. -The key advantage of this approach is that it doesn’t need any external system to trigger, once TTL is configured, data removal happens automatically in background. +The key advantage of this approach is that it does not need any external system to trigger, once TTL is configured, data removal happens automatically in background. !!! note "Note" TTL can also be used to move data not only to [/dev/null](https://en.wikipedia.org/wiki/Null_device), but also between different storage systems, like from SSD to HDD. @@ -21,7 +21,7 @@ More details on [configuring TTL](../../engines/table-engines/mergetree-family/m ## ALTER DELETE {#alter-delete} -ClickHouse doesn’t have real-time point deletes like in [OLTP](https://en.wikipedia.org/wiki/Online_transaction_processing) databases. The closest thing to them are mutations. They are issued as `ALTER ... DELETE` or `ALTER ... UPDATE` queries to distinguish from normal `DELETE` or `UPDATE` as they are asynchronous batch operations, not immediate modifications. The rest of syntax after `ALTER TABLE` prefix is similar. +ClickHouse does not have real-time point deletes like in [OLTP](https://en.wikipedia.org/wiki/Online_transaction_processing) databases. The closest thing to them are mutations. They are issued as `ALTER ... DELETE` or `ALTER ... UPDATE` queries to distinguish from normal `DELETE` or `UPDATE` as they are asynchronous batch operations, not immediate modifications. The rest of syntax after `ALTER TABLE` prefix is similar. `ALTER DELETE` can be issued to flexibly remove old data. If you need to do it regularly, the main downside will be the need to have an external system to submit the query. There are also some performance considerations since mutation rewrite complete parts even there’s only a single row to be deleted. 
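As a sketch of both approaches (the table and column names are illustrative):

``` sql
-- One-off removal of old rows via a mutation (asynchronous, rewrites whole parts).
ALTER TABLE hits DELETE WHERE EventDate < today() - INTERVAL 1 MONTH;

-- Or let the table drop old rows automatically in the background.
ALTER TABLE hits MODIFY TTL EventDate + INTERVAL 1 MONTH;
```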
diff --git a/docs/en/faq/operations/production.md b/docs/en/faq/operations/production.md index 77f7a76f2f9..52ca300ced0 100644 --- a/docs/en/faq/operations/production.md +++ b/docs/en/faq/operations/production.md @@ -25,7 +25,7 @@ Here’re some key points to get reasonable fidelity in a pre-production environ - Don’t make it read-only with some frozen data. - Don’t make it write-only with just copying data without building some typical reports. - Don’t wipe it clean instead of applying schema migrations. -- Use a sample of real production data and queries. Try to choose a sample that’s still representative and makes `SELECT` queries return reasonable results. Use obfuscation if your data is sensitive and internal policies don’t allow it to leave the production environment. +- Use a sample of real production data and queries. Try to choose a sample that’s still representative and makes `SELECT` queries return reasonable results. Use obfuscation if your data is sensitive and internal policies do not allow it to leave the production environment. - Make sure that pre-production is covered by your monitoring and alerting software the same way as your production environment is. - If your production spans across multiple datacenters or regions, make sure your pre-production does the same. - If your production uses complex features like replication, distributed tables, cascading materialized views, make sure they are configured similarly in pre-production. @@ -61,8 +61,8 @@ For production use, there are two key options: `stable` and `lts`. Here is some - `stable` is the kind of package we recommend by default. They are released roughly monthly (and thus provide new features with reasonable delay) and three latest stable releases are supported in terms of diagnostics and backporting of bugfixes. - `lts` are released twice a year and are supported for a year after their initial release. You might prefer them over `stable` in the following cases: - - Your company has some internal policies that don’t allow for frequent upgrades or using non-LTS software. - - You are using ClickHouse in some secondary products that either doesn’t require any complex ClickHouse features and don’t have enough resources to keep it updated. + - Your company has some internal policies that do not allow for frequent upgrades or using non-LTS software. + - You are using ClickHouse in some secondary products that either do not require any complex ClickHouse features or do not have enough resources to keep it updated. Many teams who initially thought that `lts` is the way to go, often switch to `stable` anyway because of some recent feature that’s important for their product. diff --git a/docs/en/getting-started/install.md b/docs/en/getting-started/install.md index c444264b71f..9a4848a3ef0 100644 --- a/docs/en/getting-started/install.md +++ b/docs/en/getting-started/install.md @@ -132,7 +132,7 @@ To start the server as a daemon, run: $ sudo service clickhouse-server start ``` -If you don’t have `service` command, run as +If you do not have the `service` command, run as ``` bash $ sudo /etc/init.d/clickhouse-server start @@ -140,7 +140,7 @@ $ sudo /etc/init.d/clickhouse-server start See the logs in the `/var/log/clickhouse-server/` directory. -If the server doesn’t start, check the configurations in the file `/etc/clickhouse-server/config.xml`. +If the server does not start, check the configurations in the file `/etc/clickhouse-server/config.xml`.
You can also manually launch the server from the console: @@ -149,7 +149,7 @@ $ clickhouse-server --config-file=/etc/clickhouse-server/config.xml ``` In this case, the log will be printed to the console, which is convenient during development. -If the configuration file is in the current directory, you don’t need to specify the `--config-file` parameter. By default, it uses `./config.xml`. +If the configuration file is in the current directory, you do not need to specify the `--config-file` parameter. By default, it uses `./config.xml`. ClickHouse supports access restriction settings. They are located in the `users.xml` file (next to `config.xml`). By default, access is allowed from anywhere for the `default` user, without a password. See `user/default/networks`. diff --git a/docs/en/getting-started/tutorial.md b/docs/en/getting-started/tutorial.md index fe697972dff..694a82e100e 100644 --- a/docs/en/getting-started/tutorial.md +++ b/docs/en/getting-started/tutorial.md @@ -105,7 +105,7 @@ Syntax for creating tables is way more complicated compared to databases (see [r 2. Table schema, i.e. list of columns and their [data types](../sql-reference/data-types/index.md). 3. [Table engine](../engines/table-engines/index.md) and its settings, which determines all the details on how queries to this table will be physically executed. -Yandex.Metrica is a web analytics service, and sample dataset doesn’t cover its full functionality, so there are only two tables to create: +Yandex.Metrica is a web analytics service, and sample dataset does not cover its full functionality, so there are only two tables to create: - `hits` is a table with each action done by all users on all websites covered by the service. - `visits` is a table that contains pre-built sessions instead of individual actions. diff --git a/docs/en/guides/apply-catboost-model.md b/docs/en/guides/apply-catboost-model.md index 7c2c8a575ec..ec3ecc92141 100644 --- a/docs/en/guides/apply-catboost-model.md +++ b/docs/en/guides/apply-catboost-model.md @@ -20,7 +20,7 @@ For more information about training CatBoost models, see [Training and applying ## Prerequisites {#prerequisites} -If you don’t have the [Docker](https://docs.docker.com/install/) yet, install it. +If you do not have the [Docker](https://docs.docker.com/install/) yet, install it. !!! note "Note" [Docker](https://www.docker.com) is a software platform that allows you to create containers that isolate a CatBoost and ClickHouse installation from the rest of the system. diff --git a/docs/en/index.md b/docs/en/index.md index 676fd444995..12e72ebdf3b 100644 --- a/docs/en/index.md +++ b/docs/en/index.md @@ -54,7 +54,7 @@ The higher the load on the system, the more important it is to customize the sys - There is one large table per query. All tables are small, except for one. - A query result is significantly smaller than the source data. In other words, data is filtered or aggregated, so the result fits in a single server’s RAM. -It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). So it doesn’t make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases. +It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). 
So it does not make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases. ## Why Column-Oriented Databases Work Better in the OLAP Scenario {#why-column-oriented-databases-work-better-in-the-olap-scenario} @@ -80,15 +80,15 @@ For example, the query “count the number of records for each advertising platf ### CPU {#cpu} -Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you don’t do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns. +Since executing a query requires processing a large number of rows, it helps to dispatch all operations for entire vectors instead of for separate rows, or to implement the query engine so that there is almost no dispatching cost. If you do not do this, with any half-decent disk subsystem, the query interpreter inevitably stalls the CPU. It makes sense to both store data in columns and process it, when possible, by columns. There are two ways to do this: -1. A vector engine. All operations are written for vectors, instead of for separate values. This means you don’t need to call operations very often, and dispatching costs are negligible. Operation code contains an optimized internal cycle. +1. A vector engine. All operations are written for vectors, instead of for separate values. This means you do not need to call operations very often, and dispatching costs are negligible. Operation code contains an optimized internal cycle. 2. Code generation. The code generated for the query has all the indirect calls in it. -This is not done in “normal” databases, because it doesn’t make sense when running simple queries. However, there are exceptions. For example, MemSQL uses code generation to reduce latency when processing SQL queries. (For comparison, analytical DBMSs require optimization of throughput, not latency.) +This is not done in “normal” databases, because it does not make sense when running simple queries. However, there are exceptions. For example, MemSQL uses code generation to reduce latency when processing SQL queries. (For comparison, analytical DBMSs require optimization of throughput, not latency.) Note that for CPU efficiency, the query language must be declarative (SQL or MDX), or at least a vector (J, K). The query should only contain implicit loops, allowing for optimization. diff --git a/docs/en/interfaces/cli.md b/docs/en/interfaces/cli.md index 7e072e366dc..8457ea41857 100644 --- a/docs/en/interfaces/cli.md +++ b/docs/en/interfaces/cli.md @@ -66,7 +66,7 @@ When processing a query, the client shows: 3. The result in the specified format. 4. The number of lines in the result, the time passed, and the average speed of query processing. -You can cancel a long query by pressing Ctrl+C. However, you will still need to wait for a little for the server to abort the request. It is not possible to cancel a query at certain stages. If you don’t wait and press Ctrl+C a second time, the client will exit. +You can cancel a long query by pressing Ctrl+C. However, you will still need to wait for a little for the server to abort the request. 
It is not possible to cancel a query at certain stages. If you do not wait and press Ctrl+C a second time, the client will exit. The command-line client allows passing external data (external temporary tables) for querying. For more information, see the section “External data for query processing”. diff --git a/docs/en/interfaces/formats.md b/docs/en/interfaces/formats.md index 5987ba0f676..c616d843173 100644 --- a/docs/en/interfaces/formats.md +++ b/docs/en/interfaces/formats.md @@ -662,7 +662,7 @@ ClickHouse allows: - Any order of key-value pairs in the object. - Omitting some values. -ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You don’t have to separate them with line breaks. +ClickHouse ignores spaces between elements and commas after the objects. You can pass all the objects in one line. You do not have to separate them with line breaks. **Omitted values processing** @@ -770,9 +770,9 @@ SELECT * FROM json_each_row_nested ## Native {#native} -The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is “columnar” – it doesn’t convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients. +The most efficient format. Data is written and read by blocks in binary format. For each block, the number of rows, number of columns, column names and types, and parts of columns in this block are recorded one after another. In other words, this format is “columnar” – it does not convert columns to rows. This is the format used in the native interface for interaction between servers, for using the command-line client, and for C++ clients. -You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It doesn’t make sense to work with this format yourself. +You can use this format to quickly generate dumps that can only be read by the ClickHouse DBMS. It does not make sense to work with this format yourself. ## Null {#null} @@ -1039,7 +1039,7 @@ struct Message { } ``` -Deserialization is effective and usually doesn’t increase the system load. +Deserialization is effective and usually does not increase the system load. See also [Format Schema](#formatschema). @@ -1312,7 +1312,7 @@ ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` q Unsupported ORC data types: `TIME32`, `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`. -The data types of ClickHouse table columns don’t have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column. +The data types of ClickHouse table columns do not have to match the corresponding ORC data fields. When inserting data, ClickHouse interprets data types according to the table above and then [casts](../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) the data to the data type set for the ClickHouse table column. 
### Inserting Data {#inserting-data-2} diff --git a/docs/en/interfaces/http.md b/docs/en/interfaces/http.md index 18533cfc6c2..dec3c839020 100644 --- a/docs/en/interfaces/http.md +++ b/docs/en/interfaces/http.md @@ -52,7 +52,7 @@ X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","writ ``` As you can see, curl is somewhat inconvenient in that spaces must be URL escaped. -Although wget escapes everything itself, we don’t recommend using it because it doesn’t work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked. +Although wget escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked. ``` bash $ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @- @@ -146,7 +146,7 @@ Deleting the table. $ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @- ``` -For successful requests that don’t return a data table, an empty response body is returned. +For successful requests that do not return a data table, an empty response body is returned. ## Compression {#compression} @@ -273,7 +273,7 @@ Possible header fields: - `written_rows` — Number of rows written. - `written_bytes` — Volume of data written in bytes. -Running requests don’t stop automatically if the HTTP connection is lost. Parsing and data formatting are performed on the server-side, and using the network might be ineffective. +Running requests do not stop automatically if the HTTP connection is lost. Parsing and data formatting are performed on the server-side, and using the network might be ineffective. The optional ‘query_id’ parameter can be passed as the query ID (any string). For more information, see the section “Settings, replace_running_query”. The optional ‘quota_key’ parameter can be passed as the quota key (any string). For more information, see the section “Quotas”. diff --git a/docs/en/introduction/distinctive-features.md b/docs/en/introduction/distinctive-features.md index be7c2d2e7c1..ee9f9484c67 100644 --- a/docs/en/introduction/distinctive-features.md +++ b/docs/en/introduction/distinctive-features.md @@ -61,7 +61,7 @@ Unlike other database management systems, secondary indexes in ClickHouse does n ## Suitable for Online Queries {#suitable-for-online-queries} -Most OLAP database management systems don’t aim for online queries with sub-second latencies. In alternative systems, report building time of tens of seconds or even minutes is often considered acceptable. Sometimes it takes even more which forces to prepare reports offline (in advance or by responding with “come back later”). +Most OLAP database management systems do not aim for online queries with sub-second latencies. In alternative systems, report building time of tens of seconds or even minutes is often considered acceptable. Sometimes it takes even more which forces to prepare reports offline (in advance or by responding with “come back later”). In ClickHouse low latency means that queries can be processed without delay and without trying to prepare an answer in advance, right at the same moment while the user interface page is loading. In other words, online. diff --git a/docs/en/operations/access-rights.md b/docs/en/operations/access-rights.md index 9f7d2a0b95b..8d48218f417 100644 --- a/docs/en/operations/access-rights.md +++ b/docs/en/operations/access-rights.md @@ -31,7 +31,7 @@ To see all users, roles, profiles, etc. and all their grants use [SHOW ACCESS](. 
## Usage {#access-control-usage} -By default, the ClickHouse server provides the `default` user account which is not allowed using SQL-driven access control and account management but has all the rights and permissions. The `default` user account is used in any cases when the username is not defined, for example, at login from client or in distributed queries. In distributed query processing a default user account is used, if the configuration of the server or cluster doesn’t specify the [user and password](../engines/table-engines/special/distributed.md) properties. +By default, the ClickHouse server provides the `default` user account which is not allowed using SQL-driven access control and account management but has all the rights and permissions. The `default` user account is used in any cases when the username is not defined, for example, at login from client or in distributed queries. In distributed query processing a default user account is used, if the configuration of the server or cluster does not specify the [user and password](../engines/table-engines/special/distributed.md) properties. If you just started using ClickHouse, consider the following scenario: diff --git a/docs/en/operations/backup.md b/docs/en/operations/backup.md index f4206f5d70c..9c8f5389ccd 100644 --- a/docs/en/operations/backup.md +++ b/docs/en/operations/backup.md @@ -5,7 +5,7 @@ toc_title: Data Backup # Data Backup {#data-backup} -While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can’t just drop tables with a MergeTree-like engine containing more than 50 Gb of data](server-configuration-parameters/settings.md#max-table-size-to-drop). However, these safeguards don’t cover all possible cases and can be circumvented. +While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can’t just drop tables with a MergeTree-like engine containing more than 50 Gb of data](server-configuration-parameters/settings.md#max-table-size-to-drop). However, these safeguards do not cover all possible cases and can be circumvented. In order to effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**. @@ -30,7 +30,7 @@ For smaller volumes of data, a simple `INSERT INTO ... SELECT ...` to remote tab ## Manipulations with Parts {#manipulations-with-parts} -ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hardlinks to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. 
The created copies of files are not handled by ClickHouse server, so you can just leave them there: you will have a simple backup that doesn’t require any additional external system, but it will still be prone to hardware issues. For this reason, it’s better to remotely copy them to another location and then remove the local copies. Distributed filesystems and object stores are still a good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network filesystem or maybe [rsync](https://en.wikipedia.org/wiki/Rsync)).
+ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hardlinks to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it’s better to remotely copy them to another location and then remove the local copies. Distributed filesystems and object stores are still good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network filesystem or maybe [rsync](https://en.wikipedia.org/wiki/Rsync)).

Data can be restored from backup using the `ALTER TABLE ... ATTACH PARTITION ...` For more information about queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter/partition.md#alter_manipulations-with-partitions).

diff --git a/docs/en/operations/configuration-files.md b/docs/en/operations/configuration-files.md
index 9864efd648a..96009c75af1 100644
--- a/docs/en/operations/configuration-files.md
+++ b/docs/en/operations/configuration-files.md
@@ -5,9 +5,9 @@ toc_title: Configuration Files

# Configuration Files {#configuration_files}

-ClickHouse supports multi-file configuration management. The main server configuration file is `/etc/clickhouse-server/config.xml`. Other files must be in the `/etc/clickhouse-server/config.d` directory.
+ClickHouse supports multi-file configuration management. The main server configuration file is `/etc/clickhouse-server/config.xml` or `/etc/clickhouse-server/config.yaml`. Other files must be in the `/etc/clickhouse-server/config.d` directory. Note that any configuration file can be written either in XML or YAML, but mixing formats in one file is not supported. For example, you can have main configs as `config.xml` and `users.xml` and write additional files in `config.d` and `users.d` directories in `.yaml`.

-All the configuration files should be in XML format. Also, they should have the same root element, usually `<yandex>`.
+All the configuration files should be in XML or YAML formats. All XML files should have the same root element, usually `<yandex>`. As for YAML, `yandex:` should not be present; the parser will insert it automatically.

## Override {#override}

@@ -32,7 +32,7 @@ Users configuration can be splitted into separate files similar to `config.xml`

Directory name is defined as `users_config` setting without `.xml` postfix concatenated with `.d`. Directory `users.d` is used by default, as `users_config` defaults to `users.xml`.
-## Example {#example} +## XML example {#example} For example, you can have separate config file for each user like this: @@ -55,6 +55,70 @@ $ cat /etc/clickhouse-server/users.d/alice.xml ``` +## YAML examples {#example} + +Here you can see default config written in YAML: [config.yaml.example](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/config.yaml.example). + +There are some differences between YAML and XML formats in terms of ClickHouse configurations. Here are some tips for writing a configuration in YAML format. + +You should use a Scalar node to write a key-value pair: +``` yaml +key: value +``` + +To create a node, containing other nodes you should use a Map: +``` yaml +map_key: + key1: val1 + key2: val2 + key3: val3 +``` + +To create a list of values or nodes assigned to one tag you should use a Sequence: +``` yaml +seq_key: + - val1 + - val2 + - key1: val3 + - map: + key2: val4 + key3: val5 +``` + +If you want to write an attribute for a Sequence or Map node, you should use a @ prefix before the attribute key. Note, that @ is reserved by YAML standard, so you should also to wrap it into double quotes: + +``` yaml +map: + "@attr1": value1 + "@attr2": value2 + key: 123 +``` + +From that Map we will get these XML nodes: + +``` xml + + 123 + +``` + +You can also set attributes for Sequence: + +``` yaml +seq: + - "@attr1": value1 + - "@attr2": value2 + - 123 + - abc +``` + +So, we can get YAML config equal to this XML one: + +``` xml +123 +abc +``` + ## Implementation Details {#implementation-details} For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available on the server start, the server loads the configuration from the preprocessed file. diff --git a/docs/en/operations/external-authenticators/ldap.md b/docs/en/operations/external-authenticators/ldap.md index 1b65ecc968b..805d45e1b38 100644 --- a/docs/en/operations/external-authenticators/ldap.md +++ b/docs/en/operations/external-authenticators/ldap.md @@ -17,6 +17,7 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`. + localhost 636 @@ -31,6 +32,18 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`. /path/to/tls_ca_cert_dir ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384 + + + + localhost + 389 + EXAMPLE\{user_name} + + CN=Users,DC=example,DC=com + (&(objectClass=user)(sAMAccountName={user_name})) + + no + ``` @@ -43,6 +56,15 @@ Note, that you can define multiple LDAP servers inside the `ldap_servers` sectio - `port` — LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise. - `bind_dn` — Template used to construct the DN to bind to. - The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt. +- `user_dn_detection` - Section with LDAP search parameters for detecting the actual user DN of the bound user. + - This is mainly used in search filters for further role mapping when the server is Active Directory. The resulting user DN will be used when replacing `{user_dn}` substrings wherever they are allowed. By default, user DN is set equal to bind DN, but once search is performed, it will be updated with to the actual detected user DN value. 
+ - `base_dn` - Template used to construct the base DN for the LDAP search. + - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during the LDAP search. + - `scope` - Scope of the LDAP search. + - Accepted values are: `base`, `one_level`, `children`, `subtree` (the default). + - `search_filter` - Template used to construct the search filter for the LDAP search. + - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, and base DN during the LDAP search. + - Note, that the special characters must be escaped properly in XML. - `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server. - Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request. - `enable_tls` — A flag to trigger the use of the secure connection to the LDAP server. @@ -107,7 +129,7 @@ Goes into `config.xml`. - + my_ldap_server @@ -122,6 +144,18 @@ Goes into `config.xml`. clickhouse_ + + + + my_ad_server + + CN=Users,DC=example,DC=com + CN + subtree + (&(objectClass=group)(member={user_dn})) + clickhouse_ + + ``` @@ -137,13 +171,13 @@ Note that `my_ldap_server` referred in the `ldap` section inside the `user_direc - When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement. - There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied. - `base_dn` — Template used to construct the base DN for the LDAP search. - - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during each LDAP search. + - The resulting DN will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{user_dn}` substrings of the template with the actual user name, bind DN, and user DN during each LDAP search. - `scope` — Scope of the LDAP search. - Accepted values are: `base`, `one_level`, `children`, `subtree` (the default). - `search_filter` — Template used to construct the search filter for the LDAP search. - - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}` and `{base_dn}` substrings of the template with the actual user name, bind DN and base DN during each LDAP search. + - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, `{user_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, user DN, and base DN during each LDAP search. - Note, that the special characters must be escaped properly in XML. - - `attribute` — Attribute name whose values will be returned by the LDAP search. + - `attribute` — Attribute name whose values will be returned by the LDAP search. `cn`, by default. 
- `prefix` — Prefix, that will be expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default. [Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/) diff --git a/docs/en/operations/optimizing-performance/sampling-query-profiler.md b/docs/en/operations/optimizing-performance/sampling-query-profiler.md index 0c075180530..9244592d515 100644 --- a/docs/en/operations/optimizing-performance/sampling-query-profiler.md +++ b/docs/en/operations/optimizing-performance/sampling-query-profiler.md @@ -11,13 +11,13 @@ To use profiler: - Setup the [trace_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) section of the server configuration. - This section configures the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table containing the results of the profiler functioning. It is configured by default. Remember that data in this table is valid only for a running server. After the server restart, ClickHouse doesn’t clean up the table and all the stored virtual memory address may become invalid. + This section configures the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table containing the results of the profiler functioning. It is configured by default. Remember that data in this table is valid only for a running server. After the server restart, ClickHouse does not clean up the table and all the stored virtual memory address may become invalid. - Setup the [query_profiler_cpu_time_period_ns](../../operations/settings/settings.md#query_profiler_cpu_time_period_ns) or [query_profiler_real_time_period_ns](../../operations/settings/settings.md#query_profiler_real_time_period_ns) settings. Both settings can be used simultaneously. These settings allow you to configure profiler timers. As these are the session settings, you can get different sampling frequency for the whole server, individual users or user profiles, for your interactive session, and for each individual query. -The default sampling frequency is one sample per second and both CPU and real timers are enabled. This frequency allows collecting enough information about ClickHouse cluster. At the same time, working with this frequency, profiler doesn’t affect ClickHouse server’s performance. If you need to profile each individual query try to use higher sampling frequency. +The default sampling frequency is one sample per second and both CPU and real timers are enabled. This frequency allows collecting enough information about ClickHouse cluster. At the same time, working with this frequency, profiler does not affect ClickHouse server’s performance. If you need to profile each individual query try to use higher sampling frequency. To analyze the `trace_log` system table: diff --git a/docs/en/operations/quotas.md b/docs/en/operations/quotas.md index 56c3eaf6455..bbea735cdba 100644 --- a/docs/en/operations/quotas.md +++ b/docs/en/operations/quotas.md @@ -72,7 +72,7 @@ The resource consumption calculated for each interval is output to the server lo ``` -For the ‘statbox’ quota, restrictions are set for every hour and for every 24 hours (86,400 seconds). The time interval is counted, starting from an implementation-defined fixed moment in time. 
In other words, the 24-hour interval doesn’t necessarily begin at midnight. +For the ‘statbox’ quota, restrictions are set for every hour and for every 24 hours (86,400 seconds). The time interval is counted, starting from an implementation-defined fixed moment in time. In other words, the 24-hour interval does not necessarily begin at midnight. When the interval ends, all collected values are cleared. For the next hour, the quota calculation starts over. diff --git a/docs/en/operations/server-configuration-parameters/settings.md b/docs/en/operations/server-configuration-parameters/settings.md index 19671b523e3..801a1d27add 100644 --- a/docs/en/operations/server-configuration-parameters/settings.md +++ b/docs/en/operations/server-configuration-parameters/settings.md @@ -502,12 +502,12 @@ The default `max_server_memory_usage` value is calculated as `memory_amount * ma ## max_server_memory_usage_to_ram_ratio {#max_server_memory_usage_to_ram_ratio} -Defines the fraction of total physical RAM amount, available to the Clickhouse server. If the server tries to utilize more, the memory is cut down to the appropriate amount. +Defines the fraction of total physical RAM amount, available to the ClickHouse server. If the server tries to utilize more, the memory is cut down to the appropriate amount. Possible values: - Positive double. -- 0 — The Clickhouse server can use all available RAM. +- 0 — The ClickHouse server can use all available RAM. Default value: `0`. @@ -826,7 +826,7 @@ Use the following parameters to configure logging: - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` defined. - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table. -If the table doesn’t exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically. +If the table does not exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically. **Example** @@ -853,7 +853,7 @@ Use the following parameters to configure logging: - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` defined. - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table. -If the table doesn’t exist, ClickHouse will create it. If the structure of the query thread log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically. +If the table does not exist, ClickHouse will create it. If the structure of the query thread log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically. **Example** @@ -1155,7 +1155,7 @@ This setting only applies to the `MergeTree` family. It can be specified: If `use_minimalistic_part_header_in_zookeeper = 1`, then [replicated](../../engines/table-engines/mergetree-family/replication.md) tables store the headers of the data parts compactly using a single `znode`. 
If the table contains many columns, this storage method significantly reduces the volume of the data stored in Zookeeper.

!!! attention "Attention"
-    After applying `use_minimalistic_part_header_in_zookeeper = 1`, you can’t downgrade the ClickHouse server to a version that doesn’t support this setting. Be careful when upgrading ClickHouse on servers in a cluster. Don’t upgrade all the servers at once. It is safer to test new versions of ClickHouse in a test environment, or on just a few servers of a cluster.
+    After applying `use_minimalistic_part_header_in_zookeeper = 1`, you can’t downgrade the ClickHouse server to a version that does not support this setting. Be careful when upgrading ClickHouse on servers in a cluster. Don’t upgrade all the servers at once. It is safer to test new versions of ClickHouse in a test environment, or on just a few servers of a cluster.

    Data part headers already stored with this setting can't be restored to their previous (non-compact) representation.

diff --git a/docs/en/operations/settings/merge-tree-settings.md b/docs/en/operations/settings/merge-tree-settings.md
index 6e3d0bc0fde..10ea46098d4 100644
--- a/docs/en/operations/settings/merge-tree-settings.md
+++ b/docs/en/operations/settings/merge-tree-settings.md
@@ -251,4 +251,15 @@ Possible values:

Default value: -1 (unlimited).

+## allow_floating_point_partition_key {#allow_floating_point_partition_key}
+
+Enables the use of a floating-point number as a partition key.
+
+Possible values:
+
+- 0 — Floating-point partition key not allowed.
+- 1 — Floating-point partition key allowed.
+
+Default value: `0`.
+
[Original article](https://clickhouse.tech/docs/en/operations/settings/merge_tree_settings/)
diff --git a/docs/en/operations/settings/query-complexity.md b/docs/en/operations/settings/query-complexity.md
index 2ecf50762d5..d60aa170907 100644
--- a/docs/en/operations/settings/query-complexity.md
+++ b/docs/en/operations/settings/query-complexity.md
@@ -19,7 +19,7 @@ It can take one of two values: `throw` or `break`. Restrictions on aggregation (

`break` – Stop executing the query and return the partial result, as if the source data ran out.

-`any (only for group_by_overflow_mode)` – Continuing aggregation for the keys that got into the set, but don’t add new keys to the set.
+`any (only for group_by_overflow_mode)` – Continuing aggregation for the keys that got into the set, but do not add new keys to the set.

## max_memory_usage {#settings_max_memory_usage}

The maximum amount of RAM to use for running a query on a single server.

In the default configuration file, the maximum is 10 GB.

-The setting doesn’t consider the volume of available memory or the total volume of memory on the machine.
+The setting does not consider the volume of available memory or the total volume of memory on the machine.
The restriction applies to a single query within a single server.

You can use `SHOW PROCESSLIST` to see the current memory consumption for each query.
Besides, the peak memory consumption is tracked for each query and written to the log.

@@ -288,7 +288,7 @@ Defines what action ClickHouse performs when any of the following join limits is

Possible values:

- `THROW` — ClickHouse throws an exception and breaks operation.
-- `BREAK` — ClickHouse breaks operation and doesn’t throw an exception.
+- `BREAK` — ClickHouse breaks operation and does not throw an exception.

Default value: `THROW`.
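The query-complexity limits above are ordinary session settings, so they can be tried out interactively. A minimal sketch (the 4 GB figure is only an illustration, not a recommendation):

``` sql
-- Cap a single query at roughly 4 GB of RAM for the current session
SET max_memory_usage = 4000000000;

-- Check the effective value
SELECT name, value FROM system.settings WHERE name = 'max_memory_usage';
```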
diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index a5dc66cf0d6..10461eacbff 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -365,13 +365,37 @@ throws an exception. ## input_format_null_as_default {#settings-input-format-null-as-default} -Enables or disables using default values if input data contain `NULL`, but the data type of the corresponding column in not `Nullable(T)` (for text input formats). +Enables or disables the initialization of [NULL](../../sql-reference/syntax.md#null-literal) fields with [default values](../../sql-reference/statements/create/table.md#create-default-values), if data type of these fields is not [nullable](../../sql-reference/data-types/nullable.md#data_type-nullable). +If column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting. + +This setting is applicable to [INSERT ... VALUES](../../sql-reference/statements/insert-into.md) queries for text input formats. + +Possible values: + +- 0 — Inserting `NULL` into a not nullable column causes an exception. +- 1 — `NULL` fields are initialized with default column values. + +Default value: `1`. + +## insert_null_as_default {#insert_null_as_default} + +Enables or disables the insertion of [default values](../../sql-reference/statements/create/table.md#create-default-values) instead of [NULL](../../sql-reference/syntax.md#null-literal) into columns with not [nullable](../../sql-reference/data-types/nullable.md#data_type-nullable) data type. +If column type is not nullable and this setting is disabled, then inserting `NULL` causes an exception. If column type is nullable, then `NULL` values are inserted as is, regardless of this setting. + +This setting is applicable to [INSERT ... SELECT](../../sql-reference/statements/insert-into.md#insert_query_insert-select) queries. Note that `SELECT` subqueries may be concatenated with `UNION ALL` clause. + +Possible values: + +- 0 — Inserting `NULL` into a not nullable column causes an exception. +- 1 — Default column value is inserted instead of `NULL`. + +Default value: `1`. ## input_format_skip_unknown_fields {#settings-input-format-skip-unknown-fields} Enables or disables skipping insertion of extra data. -When writing data, ClickHouse throws an exception if input data contain columns that do not exist in the target table. If skipping is enabled, ClickHouse doesn’t insert extra data and doesn’t throw an exception. +When writing data, ClickHouse throws an exception if input data contain columns that do not exist in the target table. If skipping is enabled, ClickHouse does not insert extra data and does not throw an exception. Supported formats: @@ -428,7 +452,7 @@ Default value: 1. Allows choosing a parser of the text representation of date and time. -The setting doesn’t apply to [date and time functions](../../sql-reference/functions/date-time-functions.md). +The setting does not apply to [date and time functions](../../sql-reference/functions/date-time-functions.md). Possible values: @@ -455,15 +479,15 @@ Possible values: - `simple` - Simple output format. - Clickhouse output date and time `YYYY-MM-DD hh:mm:ss` format. For example, `2019-08-20 10:18:56`. The calculation is performed according to the data type's time zone (if present) or server time zone. + ClickHouse output date and time `YYYY-MM-DD hh:mm:ss` format. 
For example, `2019-08-20 10:18:56`. The calculation is performed according to the data type's time zone (if present) or server time zone. - `iso` - ISO output format. - Clickhouse output date and time in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) `YYYY-MM-DDThh:mm:ssZ` format. For example, `2019-08-20T10:18:56Z`. Note that output is in UTC (`Z` means UTC). + ClickHouse output date and time in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) `YYYY-MM-DDThh:mm:ssZ` format. For example, `2019-08-20T10:18:56Z`. Note that output is in UTC (`Z` means UTC). - `unix_timestamp` - Unix timestamp output format. - Clickhouse output date and time in [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) format. For example `1566285536`. + ClickHouse output date and time in [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) format. For example `1566285536`. Default value: `simple`. @@ -662,7 +686,7 @@ Default value: 8. ## merge_tree_max_rows_to_use_cache {#setting-merge-tree-max-rows-to-use-cache} -If ClickHouse should read more than `merge_tree_max_rows_to_use_cache` rows in one query, it doesn’t use the cache of uncompressed blocks. +If ClickHouse should read more than `merge_tree_max_rows_to_use_cache` rows in one query, it does not use the cache of uncompressed blocks. The cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from trashing by queries that read a large amount of data. The [uncompressed_cache_size](../../operations/server-configuration-parameters/settings.md#server-settings-uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks. @@ -674,7 +698,7 @@ Default value: 128 ✕ 8192. ## merge_tree_max_bytes_to_use_cache {#setting-merge-tree-max-bytes-to-use-cache} -If ClickHouse should read more than `merge_tree_max_bytes_to_use_cache` bytes in one query, it doesn’t use the cache of uncompressed blocks. +If ClickHouse should read more than `merge_tree_max_bytes_to_use_cache` bytes in one query, it does not use the cache of uncompressed blocks. The cache of uncompressed blocks stores data extracted for queries. ClickHouse uses this cache to speed up responses to repeated small queries. This setting protects the cache from trashing by queries that read a large amount of data. The [uncompressed_cache_size](../../operations/server-configuration-parameters/settings.md#server-settings-uncompressed_cache_size) server setting defines the size of the cache of uncompressed blocks. @@ -816,8 +840,8 @@ Result: The size of blocks (in a count of rows) to form for insertion into a table. This setting only applies in cases when the server forms the blocks. For example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size. -But when using clickhouse-client, the client parses the data itself, and the ‘max_insert_block_size’ setting on the server doesn’t affect the size of the inserted blocks. -The setting also doesn’t have a purpose when using INSERT SELECT, since data is inserted using the same blocks that are formed after SELECT. +But when using clickhouse-client, the client parses the data itself, and the ‘max_insert_block_size’ setting on the server does not affect the size of the inserted blocks. +The setting also does not have a purpose when using INSERT SELECT, since data is inserted using the same blocks that are formed after SELECT. Default value: 1,048,576. 
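A small sketch of the `date_time_output_format` modes listed above; the literal and the `UTC` time zone are chosen only for illustration:

``` sql
SET date_time_output_format = 'iso';
SELECT toDateTime('2019-08-20 10:18:56', 'UTC') AS dt;
-- Returns 2019-08-20T10:18:56Z instead of the default 2019-08-20 10:18:56
```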
@@ -887,7 +911,7 @@ Higher values will lead to higher memory usage. The maximum size of blocks of uncompressed data before compressing for writing to a table. By default, 1,048,576 (1 MiB). Specifying smaller block size generally leads to slightly reduced compression ratio, the compression and decompression speed increases slightly due to cache locality, and memory consumption is reduced. !!! note "Warning" - This is an expert-level setting, and you shouldn't change it if you're just getting started with Clickhouse. + This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse. Don’t confuse blocks for compression (a chunk of memory consisting of bytes) with blocks for query processing (a set of rows from a table). @@ -904,7 +928,7 @@ We are writing a UInt32-type column (4 bytes per value). When writing 8192 rows, We are writing a URL column with the String type (average size of 60 bytes per value). When writing 8192 rows, the average will be slightly less than 500 KB of data. Since this is more than 65,536, a compressed block will be formed for each mark. In this case, when reading data from the disk in the range of a single mark, extra data won’t be decompressed. !!! note "Warning" - This is an expert-level setting, and you shouldn't change it if you're just getting started with Clickhouse. + This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse. ## max_query_size {#settings-max_query_size} @@ -1018,7 +1042,7 @@ For queries that read at least a somewhat large volume of data (one million rows When using the HTTP interface, the ‘query_id’ parameter can be passed. This is any string that serves as the query identifier. If a query from the same user with the same ‘query_id’ already exists at this time, the behaviour depends on the ‘replace_running_query’ parameter. -`0` (default) – Throw an exception (don’t allow the query to run if a query with the same ‘query_id’ is already running). +`0` (default) – Throw an exception (do not allow the query to run if a query with the same ‘query_id’ is already running). `1` – Cancel the old query and start running the new one. @@ -1077,7 +1101,7 @@ load_balancing = nearest_hostname The number of errors is counted for each replica. Every 5 minutes, the number of errors is integrally divided by 2. Thus, the number of errors is calculated for a recent time with exponential smoothing. If there is one replica with a minimal number of errors (i.e. errors occurred recently on the other replicas), the query is sent to it. If there are multiple replicas with the same minimal number of errors, the query is sent to the replica with a hostname that is most similar to the server’s hostname in the config file (for the number of different characters in identical positions, up to the minimum length of both hostnames). For instance, example01-01-1 and example01-01-2.yandex.ru are different in one position, while example01-01-1 and example01-02-2 differ in two places. -This method might seem primitive, but it doesn’t require external data about network topology, and it doesn’t compare IP addresses, which would be complicated for our IPv6 addresses. +This method might seem primitive, but it does not require external data about network topology, and it does not compare IP addresses, which would be complicated for our IPv6 addresses. Thus, if there are equivalent replicas, the closest one by name is preferred. 
We can also assume that when sending a query to the same server, in the absence of failures, a distributed query will also go to the same servers. So even if different data is placed on the replicas, the query will return mostly the same results.

@@ -1149,7 +1173,7 @@ Default value: `1`.

This setting is useful for replicated tables with a sampling key. A query may be processed faster if it is executed on several servers in parallel. But the query performance may degrade in the following cases:

-- The position of the sampling key in the partitioning key doesn't allow efficient range scans.
+- The position of the sampling key in the partitioning key does not allow efficient range scans.
- Adding a sampling key to the table makes filtering by other columns less efficient.
- The sampling key is an expression that is expensive to calculate.
- The cluster latency distribution has a long tail, so that querying more servers increases the query overall latency.

@@ -1171,7 +1195,7 @@ For testing, the value can be set to 0: compilation runs synchronously and the q

If the value is 1 or more, compilation occurs asynchronously in a separate thread. The result will be used as soon as it is ready, including queries that are currently running.

Compiled code is required for each different combination of aggregate functions used in the query and the type of keys in the GROUP BY clause.
-The results of the compilation are saved in the build directory in the form of .so files. There is no restriction on the number of compilation results since they don’t use very much space. Old results will be used after server restarts, except in the case of a server upgrade – in this case, the old results are deleted.
+The results of the compilation are saved in the build directory in the form of .so files. There is no restriction on the number of compilation results since they do not use very much space. Old results will be used after server restarts, except in the case of a server upgrade – in this case, the old results are deleted.

## output_format_json_quote_64bit_integers {#session_settings-output_format_json_quote_64bit_integers}

@@ -1505,7 +1529,7 @@ Possible values:

- 1 — skipping enabled.

-    If a shard is unavailable, ClickHouse returns a result based on partial data and doesn’t report node availability issues.
+    If a shard is unavailable, ClickHouse returns a result based on partial data and does not report node availability issues.

- 0 — skipping disabled.

@@ -1520,8 +1544,8 @@ Do not merge aggregation states from different servers for distributed query pro

Possible values:

- 0 — Disabled (final query processing is done on the initiator node).
-- 1 - Do not merge aggregation states from different servers for distributed query processing (query completelly processed on the shard, initiator only proxy the data).
-- 2 - Same as 1 but apply `ORDER BY` and `LIMIT` on the initiator (can be used for queries with `ORDER BY` and/or `LIMIT`).
+- 1 - Do not merge aggregation states from different servers for distributed query processing (the query is processed completely on the shard and the initiator only proxies the data); can be used when it is certain that different shards hold different keys.
+- 2 - Same as `1`, but applies `ORDER BY` and `LIMIT` on the initiator, which is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1` (useful for queries with `ORDER BY` and/or `LIMIT`).
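A hedged sketch of how `distributed_group_by_no_merge = 2` might be applied; the `Distributed` table `hits_all` and its columns are hypothetical. The shards run the whole aggregation and the initiator only applies the final `ORDER BY` and `LIMIT`:

``` sql
SELECT user_id, count() AS c
FROM hits_all
GROUP BY user_id
ORDER BY c DESC
LIMIT 10
SETTINGS distributed_group_by_no_merge = 2;
```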
**Example**

@@ -1613,7 +1637,7 @@ Enables or disables query execution if [optimize_skip_unused_shards](#optimize-s

Possible values:

-- 0 — Disabled. ClickHouse doesn’t throw an exception.
+- 0 — Disabled. ClickHouse does not throw an exception.
- 1 — Enabled. Query execution is disabled only if the table has a sharding key.
- 2 — Enabled. Query execution is disabled regardless of whether a sharding key is defined for the table.

@@ -1758,7 +1782,7 @@ Default value: 0.

Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. The OS scheduler considers this priority when choosing the next thread to run on each available CPU core.

!!! warning "Warning"
-    To use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments don’t allow you to set the `CAP_SYS_NICE` capability. In this case, `clickhouse-server` shows a message about it at the start.
+    To use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments do not allow you to set the `CAP_SYS_NICE` capability. In this case, `clickhouse-server` shows a message about it at the start.

Possible values:

@@ -2034,6 +2058,16 @@ Possible values:

Default value: 16.

+## background_fetches_pool_size {#background_fetches_pool_size}
+
+Sets the number of threads performing background fetches for [replicated](../../engines/table-engines/mergetree-family/replication.md) tables. This setting is applied at the ClickHouse server start and can’t be changed in a user session. For production use with frequent small insertions or a slow ZooKeeper cluster, it is recommended to use the default value.
+
+Possible values:
+
+- Any positive integer.
+
+Default value: 8.
+
## always_fetch_merged_part {#always_fetch_merged_part}

Prohibits data parts merging in [Replicated\*MergeTree](../../engines/table-engines/mergetree-family/replication.md)-engine tables.

@@ -2043,7 +2077,7 @@ When merging is prohibited, the replica never merges parts and always downloads

Possible values:

- 0 — `Replicated*MergeTree`-engine tables merge data parts at the replica.
-- 1 — `Replicated*MergeTree`-engine tables don’t merge data parts at the replica. The tables download merged data parts from other replicas.
+- 1 — `Replicated*MergeTree`-engine tables do not merge data parts at the replica. The tables download merged data parts from other replicas.

Default value: 0.

@@ -2174,7 +2208,7 @@ Allows or restricts using the [LowCardinality](../../sql-reference/data-types/lo

If usage of `LowCardinality` is restricted, ClickHouse server converts `LowCardinality`-columns to ordinary ones for `SELECT` queries, and convert ordinary columns to `LowCardinality`-columns for `INSERT` queries.

-This setting is required mainly for third-party clients which don’t support `LowCardinality` data type.
+This setting is required mainly for third-party clients which do not support `LowCardinality` data type.

Possible values:

@@ -2662,7 +2696,7 @@ Possible values:

- `'DISTINCT'` — ClickHouse outputs rows as a result of combining queries removing duplicate rows.
- `'ALL'` — ClickHouse outputs all rows as a result of combining queries including duplicate rows.
-- `''` — Clickhouse generates an exception when used with `UNION`.
+- `''` — ClickHouse generates an exception when used with `UNION`.

Default value: `''`.
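For `union_default_mode`, a short illustration of the behaviour described above; no table setup is needed:

``` sql
SET union_default_mode = 'DISTINCT';
SELECT 1 UNION SELECT 1;   -- treated as UNION DISTINCT, returns a single row

SET union_default_mode = '';
SELECT 1 UNION SELECT 1;   -- throws an exception: ALL or DISTINCT must be specified explicitly
```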
@@ -2989,7 +3023,6 @@ SET limit = 5;
SET offset = 7;
SELECT * FROM test LIMIT 10 OFFSET 100;
```
-
Result:

``` text
@@ -3000,4 +3033,36 @@ Result:
└─────┘
```

+## optimize_fuse_sum_count_avg {#optimize_fuse_sum_count_avg}
+
+Enables fusing aggregate functions that share the same argument. It rewrites a query that contains at least two aggregate functions from [sum](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum), [count](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count) or [avg](../../sql-reference/aggregate-functions/reference/avg.md#agg_function-avg) with an identical argument into [sumCount](../../sql-reference/aggregate-functions/reference/sumcount.md#agg_function-sumCount).
+
+Possible values:
+
+- 0 — Functions with an identical argument are not fused.
+- 1 — Functions with an identical argument are fused.
+
+Default value: `0`.
+
+**Example**
+
+Query:
+
+``` sql
+CREATE TABLE fuse_tbl(a Int8, b Int8) Engine = Log;
+SET optimize_fuse_sum_count_avg = 1;
+EXPLAIN SYNTAX SELECT sum(a), sum(b), count(b), avg(b) from fuse_tbl FORMAT TSV;
+```
+
+Result:
+
+``` text
+SELECT
+    sum(a),
+    sumCount(b).1,
+    sumCount(b).2,
+    (sumCount(b).1) / (sumCount(b).2)
+FROM fuse_tbl
+```
+
[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/)
diff --git a/docs/en/operations/system-tables/index.md b/docs/en/operations/system-tables/index.md
index e66f082167e..ab3ba25493a 100644
--- a/docs/en/operations/system-tables/index.md
+++ b/docs/en/operations/system-tables/index.md
@@ -59,7 +59,7 @@ For collecting system metrics ClickHouse server uses:

**procfs**

-If ClickHouse server doesn’t have `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).
+If ClickHouse server does not have `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, ClickHouse server collects these metrics:

diff --git a/docs/en/operations/system-tables/one.md b/docs/en/operations/system-tables/one.md
index a85e01bc75a..51316dfbc44 100644
--- a/docs/en/operations/system-tables/one.md
+++ b/docs/en/operations/system-tables/one.md
@@ -2,7 +2,7 @@

This table contains a single row with a single `dummy` UInt8 column containing the value 0.

-This table is used if a `SELECT` query doesn’t specify the `FROM` clause.
+This table is used if a `SELECT` query does not specify the `FROM` clause.

This is similar to the `DUAL` table found in other DBMSs.

diff --git a/docs/en/operations/system-tables/parts.md b/docs/en/operations/system-tables/parts.md
index f02d1ebc114..5a4715a4513 100644
--- a/docs/en/operations/system-tables/parts.md
+++ b/docs/en/operations/system-tables/parts.md
@@ -26,7 +26,7 @@ Columns:

- `active` ([UInt8](../../sql-reference/data-types/int-uint.md)) – Flag that indicates whether the data part is active. If a data part is active, it’s used in a table. Otherwise, it’s deleted. Inactive data parts remain after merging.

-- `marks` ([UInt64](../../sql-reference/data-types/int-uint.md)) – The number of marks.
To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint does not work for adaptive granularity). - `rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) – The number of rows. @@ -66,7 +66,7 @@ Columns: - `primary_key_bytes_in_memory_allocated` ([UInt64](../../sql-reference/data-types/int-uint.md)) – The amount of memory (in bytes) reserved for primary key values. -- `is_frozen` ([UInt8](../../sql-reference/data-types/int-uint.md)) – Flag that shows that a partition data backup exists. 1, the backup exists. 0, the backup doesn’t exist. For more details, see [FREEZE PARTITION](../../sql-reference/statements/alter/partition.md#alter_freeze-partition) +- `is_frozen` ([UInt8](../../sql-reference/data-types/int-uint.md)) – Flag that shows that a partition data backup exists. 1, the backup exists. 0, the backup does not exist. For more details, see [FREEZE PARTITION](../../sql-reference/statements/alter/partition.md#alter_freeze-partition) - `database` ([String](../../sql-reference/data-types/string.md)) – Name of the database. diff --git a/docs/en/operations/system-tables/parts_columns.md b/docs/en/operations/system-tables/parts_columns.md index 5c3dd7155f7..293abb18a50 100644 --- a/docs/en/operations/system-tables/parts_columns.md +++ b/docs/en/operations/system-tables/parts_columns.md @@ -26,7 +26,7 @@ Columns: - `active` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the data part is active. If a data part is active, it’s used in a table. Otherwise, it’s deleted. Inactive data parts remain after merging. -- `marks` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint doesn’t work for adaptive granularity). +- `marks` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint does not work for adaptive granularity). - `rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of rows. diff --git a/docs/en/operations/system-tables/processes.md b/docs/en/operations/system-tables/processes.md index a379fc4a07a..9ef3c648006 100644 --- a/docs/en/operations/system-tables/processes.md +++ b/docs/en/operations/system-tables/processes.md @@ -11,7 +11,7 @@ Columns: - `bytes_read` (UInt64) – The number of uncompressed bytes read from the table. For distributed processing, on the requestor server, this is the total for all remote servers. - `total_rows_approx` (UInt64) – The approximation of the total number of rows that should be read. For distributed processing, on the requestor server, this is the total for all remote servers. It can be updated during request processing, when new sources to process become known. - `memory_usage` (UInt64) – Amount of RAM the request uses. It might not include some types of dedicated memory. See the [max_memory_usage](../../operations/settings/query-complexity.md#settings_max_memory_usage) setting. -- `query` (String) – The query text. For `INSERT`, it doesn’t include the data to insert. +- `query` (String) – The query text. For `INSERT`, it does not include the data to insert. - `query_id` (String) – Query ID, if defined. 
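Since `system.processes` is a regular system table, the columns above can be combined freely; a minimal sketch for a quick look at currently running queries:

``` sql
SELECT
    query_id,
    user,
    elapsed,
    formatReadableSize(memory_usage) AS memory,
    query
FROM system.processes;
```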
diff --git a/docs/en/operations/system-tables/query_log.md b/docs/en/operations/system-tables/query_log.md index 6cf87ee1f17..85f0679fe37 100644 --- a/docs/en/operations/system-tables/query_log.md +++ b/docs/en/operations/system-tables/query_log.md @@ -3,15 +3,15 @@ Contains information about executed queries, for example, start time, duration of processing, error messages. !!! note "Note" - This table doesn’t contain the ingested data for `INSERT` queries. + This table does not contain the ingested data for `INSERT` queries. You can change settings of queries logging in the [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) section of the server configuration. -You can disable queries logging by setting [log_queries = 0](../../operations/settings/settings.md#settings-log-queries). We don’t recommend to turn off logging because information in this table is important for solving issues. +You can disable queries logging by setting [log_queries = 0](../../operations/settings/settings.md#settings-log-queries). We do not recommend to turn off logging because information in this table is important for solving issues. The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query. -ClickHouse doesn’t delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details. +ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details. The `system.query_log` table registers two kinds of queries: @@ -37,8 +37,8 @@ Columns: - `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution. - `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision. - `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds. -- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends it’s `read_rows` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don’t affect this value. -- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of rows read at all replicas. Each replica sends it’s `read_bytes` value, and the server-initiator of the query summarizes all received and local values. The cache volumes don’t affect this value. +- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions participated in query. 
It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends it’s `read_rows` value, and the server-initiator of the query summarizes all received and local values. The cache volumes do not affect this value. +- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of rows read at all replicas. Each replica sends it’s `read_bytes` value, and the server-initiator of the query summarizes all received and local values. The cache volumes do not affect this value. - `written_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0. - `written_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0. - `result_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` query, or a number of rows in the `INSERT` query. diff --git a/docs/en/operations/system-tables/query_thread_log.md b/docs/en/operations/system-tables/query_thread_log.md index 0ae2e7d5d3b..296a33259b3 100644 --- a/docs/en/operations/system-tables/query_thread_log.md +++ b/docs/en/operations/system-tables/query_thread_log.md @@ -9,7 +9,7 @@ To start logging: The flushing period of data is set in `flush_interval_milliseconds` parameter of the [query_thread_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server settings section. To force flushing, use the [SYSTEM FLUSH LOGS](../../sql-reference/statements/system.md#query_language-system-flush_logs) query. -ClickHouse doesn’t delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details. +ClickHouse does not delete data from the table automatically. See [Introduction](../../operations/system-tables/index.md#system-tables-introduction) for more details. Columns: diff --git a/docs/en/operations/system-tables/replicas.md b/docs/en/operations/system-tables/replicas.md index 8da68d2d2ab..63a2141e399 100644 --- a/docs/en/operations/system-tables/replicas.md +++ b/docs/en/operations/system-tables/replicas.md @@ -57,7 +57,7 @@ Columns: Note that writes can be performed to any replica that is available and has a session in ZK, regardless of whether it is a leader. - `can_become_leader` (`UInt8`) - Whether the replica can be a leader. - `is_readonly` (`UInt8`) - Whether the replica is in read-only mode. - This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper. + This mode is turned on if the config does not have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper. - `is_session_expired` (`UInt8`) - the session with ZooKeeper has expired. Basically the same as `is_readonly`. 
- `future_parts` (`UInt32`) - The number of data parts that will appear as the result of INSERTs or merges that haven’t been done yet. - `parts_to_check` (`UInt32`) - The number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged. @@ -84,7 +84,7 @@ The next 4 columns have a non-zero value only where there is an active session w - `active_replicas` (`UInt8`) - The number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas). If you request all the columns, the table may work a bit slowly, since several reads from ZooKeeper are made for each row. -If you don’t request the last 4 columns (log_max_index, log_pointer, total_replicas, active_replicas), the table works quickly. +If you do not request the last 4 columns (log_max_index, log_pointer, total_replicas, active_replicas), the table works quickly. For example, you can check that everything is working correctly like this: @@ -118,7 +118,7 @@ WHERE OR active_replicas < total_replicas ``` -If this query doesn’t return anything, it means that everything is fine. +If this query does not return anything, it means that everything is fine. [Original article](https://clickhouse.tech/docs/en/operations/system_tables/replicas) diff --git a/docs/en/operations/system-tables/replication_queue.md b/docs/en/operations/system-tables/replication_queue.md index 539a29432ac..965774b81bf 100644 --- a/docs/en/operations/system-tables/replication_queue.md +++ b/docs/en/operations/system-tables/replication_queue.md @@ -77,7 +77,7 @@ parts_to_merge: ['20201130_121373_121378_1','20201130_121379_121379_0',' is_detach: 0 is_currently_executing: 0 num_tries: 36 -last_exception: Code: 226, e.displayText() = DB::Exception: Marks file '/opt/clickhouse/data/merge/visits_v2/tmp_fetch_20201130_121373_121384_2/CounterID.mrk' doesn't exist (version 20.8.7.15 (official build)) +last_exception: Code: 226, e.displayText() = DB::Exception: Marks file '/opt/clickhouse/data/merge/visits_v2/tmp_fetch_20201130_121373_121384_2/CounterID.mrk' does not exist (version 20.8.7.15 (official build)) last_attempt_time: 2020-12-08 17:35:54 num_postponed: 0 postpone_reason: diff --git a/docs/en/operations/system-tables/stack_trace.md b/docs/en/operations/system-tables/stack_trace.md index 44b13047cc3..eb1824a6f66 100644 --- a/docs/en/operations/system-tables/stack_trace.md +++ b/docs/en/operations/system-tables/stack_trace.md @@ -6,6 +6,7 @@ To analyze stack frames, use the `addressToLine`, `addressToSymbol` and `demangl Columns: +- `thread_name` ([String](../../sql-reference/data-types/string.md)) — Thread name. - `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Thread identifier. - `query_id` ([String](../../sql-reference/data-types/string.md)) — Query identifier that can be used to get details about a query that was running from the [query_log](../system-tables/query_log.md) system table. - `trace` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — A [stack trace](https://en.wikipedia.org/wiki/Stack_trace) which represents a list of physical addresses where the called methods are stored. 
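The new `thread_name` column also makes it easy to get a quick overview of what the server threads are doing before drilling into individual traces. A minimal sketch, assuming a server version that already ships this column:

``` sql
-- Overview of server threads by name (no symbolization is needed for this query).
SELECT thread_name, count() AS threads
FROM system.stack_trace
GROUP BY thread_name
ORDER BY threads DESC;
```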
@@ -21,12 +22,14 @@ SET allow_introspection_functions = 1; Getting symbols from ClickHouse object files: ``` sql -WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT thread_id, query_id, arrayStringConcat(all, '\n') AS res FROM system.stack_trace LIMIT 1 \G +WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT thread_name, thread_id, query_id, arrayStringConcat(all, '\n') AS res FROM system.stack_trace LIMIT 1 \G; ``` ``` text Row 1: ────── +thread_name: clickhouse-serv + thread_id: 686 query_id: 1a11f70b-626d-47c1-b948-f9c7b206395d res: sigqueue @@ -51,12 +54,14 @@ __clone Getting filenames and line numbers in ClickHouse source code: ``` sql -WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT thread_id, query_id, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\n') AS res FROM system.stack_trace LIMIT 1 \G +WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT thread_name, thread_id, query_id, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\n') AS res FROM system.stack_trace LIMIT 1 \G; ``` ``` text Row 1: ────── +thread_name: clickhouse-serv + thread_id: 686 query_id: cad353e7-1c29-4b2e-949f-93e597ab7a54 res: /lib/x86_64-linux-gnu/libc-2.27.so @@ -84,6 +89,3 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so - [system.trace_log](../system-tables/trace_log.md) — Contains stack traces collected by the sampling query profiler. - [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Description and usage example of the `arrayMap` function. - [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Description and usage example of the `arrayFilter` function. - - -[Original article](https://clickhouse.tech/docs/en/operations/system-tables/stack_trace) diff --git a/docs/en/operations/system-tables/tables.md b/docs/en/operations/system-tables/tables.md index ccc9ab94f8b..480db3087f6 100644 --- a/docs/en/operations/system-tables/tables.md +++ b/docs/en/operations/system-tables/tables.md @@ -54,6 +54,9 @@ Columns: - `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of bytes INSERTed since server start (only for `Buffer` tables). +- `comment` ([String](../../sql-reference/data-types/string.md)) - The comment for the table. + + The `system.tables` table is used in `SHOW TABLES` query implementation. 
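Since table comments are now exposed here, a quick way to review them is to select just the relevant columns. This is only a sketch and assumes the current database is the one of interest:

``` sql
-- List tables of the current database together with their comments.
SELECT name, engine, comment
FROM system.tables
WHERE database = currentDatabase();
```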
**Example** @@ -65,47 +68,53 @@ SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; ```text Row 1: ────── -database: system -name: aggregate_function_combinators -uuid: 00000000-0000-0000-0000-000000000000 -engine: SystemAggregateFunctionCombinators +database: base +name: t1 +uuid: 81b1c20a-b7c6-4116-a2ce-7583fb6b6736 +engine: MergeTree is_temporary: 0 -data_paths: [] -metadata_path: /var/lib/clickhouse/metadata/system/aggregate_function_combinators.sql -metadata_modification_time: 1970-01-01 03:00:00 +data_paths: ['/var/lib/clickhouse/store/81b/81b1c20a-b7c6-4116-a2ce-7583fb6b6736/'] +metadata_path: /var/lib/clickhouse/store/461/461cf698-fd0b-406d-8c01-5d8fd5748a91/t1.sql +metadata_modification_time: 2021-01-25 19:14:32 dependencies_database: [] dependencies_table: [] -create_table_query: -engine_full: -partition_key: -sorting_key: -primary_key: -sampling_key: -storage_policy: -total_rows: ᴺᵁᴸᴸ -total_bytes: ᴺᵁᴸᴸ +create_table_query: CREATE TABLE base.t1 (`n` UInt64) ENGINE = MergeTree ORDER BY n SETTINGS index_granularity = 8192 +engine_full: MergeTree ORDER BY n SETTINGS index_granularity = 8192 +partition_key: +sorting_key: n +primary_key: n +sampling_key: +storage_policy: default +total_rows: 1 +total_bytes: 99 +lifetime_rows: ᴺᵁᴸᴸ +lifetime_bytes: ᴺᵁᴸᴸ +comment: Row 2: ────── -database: system -name: asynchronous_metrics +database: default +name: 53r93yleapyears uuid: 00000000-0000-0000-0000-000000000000 -engine: SystemAsynchronousMetrics +engine: MergeTree is_temporary: 0 -data_paths: [] -metadata_path: /var/lib/clickhouse/metadata/system/asynchronous_metrics.sql -metadata_modification_time: 1970-01-01 03:00:00 +data_paths: ['/var/lib/clickhouse/data/default/53r93yleapyears/'] +metadata_path: /var/lib/clickhouse/metadata/default/53r93yleapyears.sql +metadata_modification_time: 2020-09-23 09:05:36 dependencies_database: [] dependencies_table: [] -create_table_query: -engine_full: -partition_key: -sorting_key: -primary_key: -sampling_key: -storage_policy: -total_rows: ᴺᵁᴸᴸ -total_bytes: ᴺᵁᴸᴸ +create_table_query: CREATE TABLE default.`53r93yleapyears` (`id` Int8, `febdays` Int8) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 8192 +engine_full: MergeTree ORDER BY id SETTINGS index_granularity = 8192 +partition_key: +sorting_key: id +primary_key: id +sampling_key: +storage_policy: default +total_rows: 2 +total_bytes: 155 +lifetime_rows: ᴺᵁᴸᴸ +lifetime_bytes: ᴺᵁᴸᴸ +comment: ``` [Original article](https://clickhouse.tech/docs/en/operations/system_tables/tables) diff --git a/docs/en/operations/system-tables/zookeeper.md b/docs/en/operations/system-tables/zookeeper.md index 82ace5e81dc..3b8db14934e 100644 --- a/docs/en/operations/system-tables/zookeeper.md +++ b/docs/en/operations/system-tables/zookeeper.md @@ -5,10 +5,10 @@ The query must either have a ‘path =’ condition or a `path IN` condition The query `SELECT * FROM system.zookeeper WHERE path = '/clickhouse'` outputs data for all children on the `/clickhouse` node. To output data for all root nodes, write path = ‘/’. -If the path specified in ‘path’ doesn’t exist, an exception will be thrown. +If the path specified in ‘path’ does not exist, an exception will be thrown. The query `SELECT * FROM system.zookeeper WHERE path IN ('/', '/clickhouse')` outputs data for all children on the `/` and `/clickhouse` node. -If in the specified ‘path’ collection has doesn't exist path, an exception will be thrown. +If in the specified ‘path’ collection has does not exist path, an exception will be thrown. 
It can be used to do a batch of ZooKeeper path queries. Columns: diff --git a/docs/en/operations/tips.md b/docs/en/operations/tips.md index 865fe58d7cd..0b74ae95b06 100644 --- a/docs/en/operations/tips.md +++ b/docs/en/operations/tips.md @@ -52,7 +52,7 @@ But for storing archives with rare queries, shelves will work. ## RAID {#raid} When using HDD, you can combine their RAID-10, RAID-5, RAID-6 or RAID-50. -For Linux, software RAID is better (with `mdadm`). We don’t recommend using LVM. +For Linux, software RAID is better (with `mdadm`). We do not recommend using LVM. When creating RAID-10, select the `far` layout. If your budget allows, choose RAID-10. diff --git a/docs/en/operations/troubleshooting.md b/docs/en/operations/troubleshooting.md index 39449afccef..f2695ce8437 100644 --- a/docs/en/operations/troubleshooting.md +++ b/docs/en/operations/troubleshooting.md @@ -55,7 +55,7 @@ If `clickhouse-server` start failed with a configuration error, you should see t 2019.01.11 15:23:25.549505 [ 45 ] {} ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused ``` -If you don’t see an error at the end of the file, look through the entire file starting from the string: +If you do not see an error at the end of the file, look through the entire file starting from the string: ``` text Application: starting up. @@ -79,7 +79,7 @@ Revision: 54413 **See system.d logs** -If you don’t find any useful information in `clickhouse-server` logs or there aren’t any logs, you can view `system.d` logs using the command: +If you do not find any useful information in `clickhouse-server` logs or there aren’t any logs, you can view `system.d` logs using the command: ``` bash $ sudo journalctl -u clickhouse-server diff --git a/docs/en/operations/utilities/clickhouse-obfuscator.md b/docs/en/operations/utilities/clickhouse-obfuscator.md index 7fd608fcac0..b01a7624b56 100644 --- a/docs/en/operations/utilities/clickhouse-obfuscator.md +++ b/docs/en/operations/utilities/clickhouse-obfuscator.md @@ -1,42 +1,42 @@ -# ClickHouse obfuscator - -A simple tool for table data obfuscation. - -It reads an input table and produces an output table, that retains some properties of input, but contains different data. -It allows publishing almost real production data for usage in benchmarks. - -It is designed to retain the following properties of data: -- cardinalities of values (number of distinct values) for every column and every tuple of columns; -- conditional cardinalities: number of distinct values of one column under the condition on the value of another column; -- probability distributions of the absolute value of integers; the sign of signed integers; exponent and sign for floats; -- probability distributions of the length of strings; -- probability of zero values of numbers; empty strings and arrays, `NULL`s; - -- data compression ratio when compressed with LZ77 and entropy family of codecs; -- continuity (magnitude of difference) of time values across the table; continuity of floating-point values; -- date component of `DateTime` values; - -- UTF-8 validity of string values; -- string values look natural. - -Most of the properties above are viable for performance testing: - -reading data, filtering, aggregatio, and sorting will work at almost the same speed -as on original data due to saved cardinalities, magnitudes, compression ratios, etc. 
- -It works in a deterministic fashion: you define a seed value and the transformation is determined by input data and by seed. -Some transformations are one to one and could be reversed, so you need to have a large seed and keep it in secret. - -It uses some cryptographic primitives to transform data but from the cryptographic point of view, it doesn't do it properly, that is why you should not consider the result as secure unless you have another reason. The result may retain some data you don't want to publish. - - -It always leaves 0, 1, -1 numbers, dates, lengths of arrays, and null flags exactly as in source data. -For example, you have a column `IsMobile` in your table with values 0 and 1. In transformed data, it will have the same value. - -So, the user will be able to count the exact ratio of mobile traffic. - -Let's give another example. When you have some private data in your table, like user email and you don't want to publish any single email address. -If your table is large enough and contains multiple different emails and no email has a very high frequency than all others, it will anonymize all data. But if you have a small number of different values in a column, it can reproduce some of them. -You should look at the working algorithm of this tool works, and fine-tune its command line parameters. - -This tool works fine only with an average amount of data (at least 1000s of rows). +# ClickHouse obfuscator + +A simple tool for table data obfuscation. + +It reads an input table and produces an output table, that retains some properties of input, but contains different data. +It allows publishing almost real production data for usage in benchmarks. + +It is designed to retain the following properties of data: +- cardinalities of values (number of distinct values) for every column and every tuple of columns; +- conditional cardinalities: number of distinct values of one column under the condition on the value of another column; +- probability distributions of the absolute value of integers; the sign of signed integers; exponent and sign for floats; +- probability distributions of the length of strings; +- probability of zero values of numbers; empty strings and arrays, `NULL`s; + +- data compression ratio when compressed with LZ77 and entropy family of codecs; +- continuity (magnitude of difference) of time values across the table; continuity of floating-point values; +- date component of `DateTime` values; + +- UTF-8 validity of string values; +- string values look natural. + +Most of the properties above are viable for performance testing: + +reading data, filtering, aggregatio, and sorting will work at almost the same speed +as on original data due to saved cardinalities, magnitudes, compression ratios, etc. + +It works in a deterministic fashion: you define a seed value and the transformation is determined by input data and by seed. +Some transformations are one to one and could be reversed, so you need to have a large seed and keep it in secret. + +It uses some cryptographic primitives to transform data but from the cryptographic point of view, it does not do it properly, that is why you should not consider the result as secure unless you have another reason. The result may retain some data you don't want to publish. + + +It always leaves 0, 1, -1 numbers, dates, lengths of arrays, and null flags exactly as in source data. +For example, you have a column `IsMobile` in your table with values 0 and 1. In transformed data, it will have the same value. 
+ +So, the user will be able to count the exact ratio of mobile traffic. + +Let's give another example. When you have some private data in your table, like user email and you don't want to publish any single email address. +If your table is large enough and contains multiple different emails and no email has a very high frequency than all others, it will anonymize all data. But if you have a small number of different values in a column, it can reproduce some of them. +You should look at the working algorithm of this tool works, and fine-tune its command line parameters. + +This tool works fine only with an average amount of data (at least 1000s of rows). diff --git a/docs/en/sql-reference/aggregate-functions/combinators.md b/docs/en/sql-reference/aggregate-functions/combinators.md index 259202805d3..3fc5121ebcc 100644 --- a/docs/en/sql-reference/aggregate-functions/combinators.md +++ b/docs/en/sql-reference/aggregate-functions/combinators.md @@ -61,7 +61,7 @@ Result: ## -State {#agg-functions-combinator-state} -If you apply this combinator, the aggregate function doesn’t return the resulting value (such as the number of unique values for the [uniq](../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function), but an intermediate state of the aggregation (for `uniq`, this is the hash table for calculating the number of unique values). This is an `AggregateFunction(...)` that can be used for further processing or stored in a table to finish aggregating later. +If you apply this combinator, the aggregate function does not return the resulting value (such as the number of unique values for the [uniq](../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function), but an intermediate state of the aggregation (for `uniq`, this is the hash table for calculating the number of unique values). This is an `AggregateFunction(...)` that can be used for further processing or stored in a table to finish aggregating later. To work with these states, use: @@ -77,7 +77,7 @@ If you apply this combinator, the aggregate function takes the intermediate aggr ## -MergeState {#aggregate_functions_combinators-mergestate} -Merges the intermediate aggregation states in the same way as the -Merge combinator. However, it doesn’t return the resulting value, but an intermediate aggregation state, similar to the -State combinator. +Merges the intermediate aggregation states in the same way as the -Merge combinator. However, it does not return the resulting value, but an intermediate aggregation state, similar to the -State combinator. ## -ForEach {#agg-functions-combinator-foreach} @@ -92,7 +92,7 @@ Examples: `sum(DISTINCT x)`, `groupArray(DISTINCT x)`, `corrStableDistinct(DISTI Changes behavior of an aggregate function. -If an aggregate function doesn’t have input values, with this combinator it returns the default value for its return data type. Applies to the aggregate functions that can take empty input data. +If an aggregate function does not have input values, with this combinator it returns the default value for its return data type. Applies to the aggregate functions that can take empty input data. `-OrDefault` can be used with other combinators. @@ -222,7 +222,7 @@ Lets you divide data into groups, and then separately aggregates the data in tho **Arguments** - `start` — Starting value of the whole required interval for `resampling_key` values. -- `stop` — Ending value of the whole required interval for `resampling_key` values. 
The whole interval doesn’t include the `stop` value `[start, stop)`. +- `stop` — Ending value of the whole required interval for `resampling_key` values. The whole interval does not include the `stop` value `[start, stop)`. - `step` — Step for separating the whole interval into subintervals. The `aggFunction` is executed over each of those subintervals independently. - `resampling_key` — Column whose values are used for separating data into intervals. - `aggFunction_params` — `aggFunction` parameters. diff --git a/docs/en/sql-reference/aggregate-functions/parametric-functions.md b/docs/en/sql-reference/aggregate-functions/parametric-functions.md index b9d504241db..8e97171d31b 100644 --- a/docs/en/sql-reference/aggregate-functions/parametric-functions.md +++ b/docs/en/sql-reference/aggregate-functions/parametric-functions.md @@ -9,7 +9,7 @@ Some aggregate functions can accept not only argument columns (used for compress ## histogram {#histogram} -Calculates an adaptive histogram. It doesn’t guarantee precise results. +Calculates an adaptive histogram. It does not guarantee precise results. ``` sql histogram(number_of_bins)(values) @@ -79,7 +79,7 @@ FROM └────────┴───────┘ ``` -In this case, you should remember that you don’t know the histogram bin borders. +In this case, you should remember that you do not know the histogram bin borders. ## sequenceMatch(pattern)(timestamp, cond1, cond2, …) {#function-sequencematch} @@ -114,7 +114,7 @@ Type: `UInt8`. - `(?N)` — Matches the condition argument at position `N`. Conditions are numbered in the `[1, 32]` range. For example, `(?1)` matches the argument passed to the `cond1` parameter. -- `.*` — Matches any number of events. You don’t need conditional arguments to match this element of the pattern. +- `.*` — Matches any number of events. You do not need conditional arguments to match this element of the pattern. - `(?t operator value)` — Sets the time in seconds that should separate two events. For example, pattern `(?1)(?t>1800)(?2)` matches events that occur more than 1800 seconds from each other. An arbitrary number of any events can lay between these events. You can use the `>=`, `>`, `<`, `<=` operators. @@ -172,7 +172,7 @@ SELECT sequenceMatch('(?1)(?2)')(time, number = 1, number = 2, number = 4) FROM ## sequenceCount(pattern)(time, cond1, cond2, …) {#function-sequencecount} -Counts the number of event chains that matched the pattern. The function searches event chains that don’t overlap. It starts to search for the next chain after the current chain is matched. +Counts the number of event chains that matched the pattern. The function searches event chains that do not overlap. It starts to search for the next chain after the current chain is matched. !!! warning "Warning" Events that occur at the same second may lay in the sequence in an undefined order affecting the result. @@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN) **Parameters** -- `window` — Length of the sliding window, it is the time interval between first condition and last condition. The unit of `window` depends on the `timestamp` itself and varies. Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`. +- `window` — Length of the sliding window, it is the time interval between the first and the last condition. The unit of `window` depends on the `timestamp` itself and varies. 
Determined using the expression `timestamp of cond1 <= timestamp of cond2 <= ... <= timestamp of condN <= timestamp of cond1 + window`. - `mode` — It is an optional argument. One or more modes can be set. - `'strict'` — If same condition holds for sequence of events then such non-unique events would be skipped. - `'strict_order'` — Don't allow interventions of other events. E.g. in the case of `A->B->D->C`, it stops finding `A->B->C` at the `D` and the max event level is 2. @@ -312,7 +312,7 @@ FROM GROUP BY user_id ) GROUP BY level -ORDER BY level ASC +ORDER BY level ASC; ``` Result: diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md index 32d174136e0..70f30f3a480 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md +++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigestweighted.md @@ -18,10 +18,10 @@ When using multiple `quantile*` functions with different levels in a query, the **Syntax** ``` sql -quantileTDigest(level)(expr) +quantileTDigestWeighted(level)(expr, weight) ``` -Alias: `medianTDigest`. +Alias: `medianTDigestWeighted`. **Arguments** diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletiming.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletiming.md index 58ce6495a96..dd545c1a485 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/quantiletiming.md +++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletiming.md @@ -6,7 +6,7 @@ toc_priority: 204 With the determined precision computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence. -The result is deterministic (it doesn’t depend on the query processing order). The function is optimized for working with sequences which describe distributions like loading web pages times or backend response times. +The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences which describe distributions like loading web pages times or backend response times. When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) function. @@ -31,7 +31,7 @@ Alias: `medianTiming`. The calculation is accurate if: -- Total number of values doesn’t exceed 5670. +- Total number of values does not exceed 5670. - Total number of values exceeds 5670, but the page loading time is less than 1024ms. Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms. diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md index fb3b9dbf4d2..25846cde636 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md +++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md @@ -6,7 +6,7 @@ toc_priority: 205 With the determined precision computes the [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence according to the weight of each sequence member. -The result is deterministic (it doesn’t depend on the query processing order). 
The function is optimized for working with sequences which describe distributions like loading web pages times or backend response times. +The result is deterministic (it does not depend on the query processing order). The function is optimized for working with sequences which describe distributions like loading web pages times or backend response times. When using multiple `quantile*` functions with different levels in a query, the internal states are not combined (that is, the query works less efficiently than it could). In this case, use the [quantiles](../../../sql-reference/aggregate-functions/reference/quantiles.md#quantiles) function. @@ -33,7 +33,7 @@ Alias: `medianTimingWeighted`. The calculation is accurate if: -- Total number of values doesn’t exceed 5670. +- Total number of values does not exceed 5670. - Total number of values exceeds 5670, but the page loading time is less than 1024ms. Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms. diff --git a/docs/en/sql-reference/aggregate-functions/reference/rankCorr.md b/docs/en/sql-reference/aggregate-functions/reference/rankCorr.md index 55ee1b8289b..b364317c22b 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/rankCorr.md +++ b/docs/en/sql-reference/aggregate-functions/reference/rankCorr.md @@ -1,4 +1,8 @@ -## rankCorr {#agg_function-rankcorr} +--- +toc_priority: 145 +--- + +# rankCorr {#agg_function-rankcorr} Computes a rank correlation coefficient. diff --git a/docs/en/sql-reference/aggregate-functions/reference/sumcount.md b/docs/en/sql-reference/aggregate-functions/reference/sumcount.md new file mode 100644 index 00000000000..80e87663f89 --- /dev/null +++ b/docs/en/sql-reference/aggregate-functions/reference/sumcount.md @@ -0,0 +1,46 @@ +--- +toc_priority: 144 +--- + +# sumCount {#agg_function-sumCount} + +Calculates the sum of the numbers and counts the number of rows at the same time. + +**Syntax** + +``` sql +sumCount(x) +``` + +**Arguments** + +- `x` — Input value, must be [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), or [Decimal](../../../sql-reference/data-types/decimal.md). + +**Returned value** + +- Tuple `(sum, count)`, where `sum` is the sum of numbers and `count` is the number of rows with not-NULL values. + +Type: [Tuple](../../../sql-reference/data-types/tuple.md). + +**Example** + +Query: + +``` sql +CREATE TABLE s_table (x Int8) Engine = Log; +INSERT INTO s_table SELECT number FROM numbers(0, 20); +INSERT INTO s_table VALUES (NULL); +SELECT sumCount(x) from s_table; +``` + +Result: + +``` text +┌─sumCount(x)─┐ +│ (190,20) │ +└─────────────┘ +``` + +**See also** + +- [optimize_fuse_sum_count_avg](../../../operations/settings/settings.md#optimize_fuse_sum_count_avg) setting. diff --git a/docs/en/sql-reference/aggregate-functions/reference/topk.md b/docs/en/sql-reference/aggregate-functions/reference/topk.md index b9bea013ea8..7e6d0db4946 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/topk.md +++ b/docs/en/sql-reference/aggregate-functions/reference/topk.md @@ -12,7 +12,7 @@ Implements the [Filtered Space-Saving](http://www.l2f.inesc-id.pt/~fmmb/wiki/upl topK(N)(column) ``` -This function doesn’t provide a guaranteed result. In certain situations, errors might occur and it might return frequent values that aren’t the most frequent values. +This function does not provide a guaranteed result. 
In certain situations, errors might occur and it might return frequent values that aren’t the most frequent values. We recommend using the `N < 10` value; performance is reduced with large `N` values. Maximum value of `N = 65536`. diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniq.md b/docs/en/sql-reference/aggregate-functions/reference/uniq.md index 94771c59cc8..598af24c0de 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniq.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniq.md @@ -28,7 +28,7 @@ Function: This algorithm is very accurate and very efficient on the CPU. When the query contains several of these functions, using `uniq` is almost as fast as using other aggregate functions. -- Provides the result deterministically (it doesn’t depend on the query processing order). +- Provides the result deterministically (it does not depend on the query processing order). We recommend using this function in almost all scenarios. diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniqcombined.md b/docs/en/sql-reference/aggregate-functions/reference/uniqcombined.md index 8ae916961f9..623c43ae10c 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniqcombined.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniqcombined.md @@ -32,7 +32,7 @@ Function: For a small number of distinct elements, an array is used. When the set size is larger, a hash table is used. For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory. -- Provides the result deterministically (it doesn’t depend on the query processing order). +- Provides the result deterministically (it does not depend on the query processing order). !!! note "Note" Since it uses 32-bit hash for non-`String` type, the result will have very high error for cardinalities significantly larger than `UINT_MAX` (error will raise quickly after a few tens of billions of distinct values), hence in this case you should use [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64) diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md index 80b1e935b55..1d619ab7d93 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md @@ -28,9 +28,9 @@ Function: 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). -- Provides the determinate result (it doesn’t depend on the query processing order). +- Provides the determinate result (it does not depend on the query processing order). -We don’t recommend using this function. In most cases, use the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) or [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined) function. +We do not recommend using this function. 
In most cases, use the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) or [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined) function. **See Also** diff --git a/docs/en/sql-reference/data-types/geo.md b/docs/en/sql-reference/data-types/geo.md index 9ed328e0de6..50093053686 100644 --- a/docs/en/sql-reference/data-types/geo.md +++ b/docs/en/sql-reference/data-types/geo.md @@ -5,7 +5,7 @@ toc_title: Geo # Geo Data Types {#geo-data-types} -Clickhouse supports data types for representing geographical objects — locations, lands, etc. +ClickHouse supports data types for representing geographical objects — locations, lands, etc. !!! warning "Warning" Currently geo data types are an experimental feature. To work with them you must set `allow_experimental_geo_types = 1`. diff --git a/docs/en/sql-reference/data-types/lowcardinality.md b/docs/en/sql-reference/data-types/lowcardinality.md index e0a483973e6..5f0f400ce43 100644 --- a/docs/en/sql-reference/data-types/lowcardinality.md +++ b/docs/en/sql-reference/data-types/lowcardinality.md @@ -55,7 +55,7 @@ Functions: ## See Also {#see-also} - [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality). -- [Reducing Clickhouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/). +- [Reducing ClickHouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/). - [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf). [Original article](https://clickhouse.tech/docs/en/sql-reference/data-types/lowcardinality/) diff --git a/docs/en/sql-reference/data-types/nullable.md b/docs/en/sql-reference/data-types/nullable.md index 2cf5e41867e..4207e389734 100644 --- a/docs/en/sql-reference/data-types/nullable.md +++ b/docs/en/sql-reference/data-types/nullable.md @@ -5,7 +5,7 @@ toc_title: Nullable # Nullable(typename) {#data_type-nullable} -Allows to store special marker ([NULL](../../sql-reference/syntax.md)) that denotes “missing value” alongside normal values allowed by `TypeName`. For example, a `Nullable(Int8)` type column can store `Int8` type values, and the rows that don’t have a value will store `NULL`. +Allows to store special marker ([NULL](../../sql-reference/syntax.md)) that denotes “missing value” alongside normal values allowed by `TypeName`. For example, a `Nullable(Int8)` type column can store `Int8` type values, and the rows that do not have a value will store `NULL`. For a `TypeName`, you can’t use composite data types [Array](../../sql-reference/data-types/array.md) and [Tuple](../../sql-reference/data-types/tuple.md). Composite data types can contain `Nullable` type values, such as `Array(Nullable(Int8))`. 
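As a small illustration of how `Nullable` values propagate through expressions (a sketch; `nullable_demo` is just a throwaway table name):

``` sql
-- NULL in either operand makes the arithmetic result NULL.
CREATE TABLE nullable_demo (x Int8, y Nullable(Int8)) ENGINE = TinyLog;
INSERT INTO nullable_demo VALUES (1, NULL), (2, 3);
SELECT x, y, x + y AS sum FROM nullable_demo;
```

The first row yields `NULL` for `sum`, because any arithmetic involving `NULL` is `NULL`.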
diff --git a/docs/en/sql-reference/data-types/simpleaggregatefunction.md b/docs/en/sql-reference/data-types/simpleaggregatefunction.md index f3a245e9627..8138d4a4103 100644 --- a/docs/en/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/en/sql-reference/data-types/simpleaggregatefunction.md @@ -1,6 +1,6 @@ # SimpleAggregateFunction {#data-type-simpleaggregatefunction} -`SimpleAggregateFunction(name, types_of_arguments…)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we don’t have to store and process any extra data. +`SimpleAggregateFunction(name, types_of_arguments…)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we do not have to store and process any extra data. The common way to produce an aggregate function value is by calling the aggregate function with the [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate) suffix. diff --git a/docs/en/sql-reference/data-types/string.md b/docs/en/sql-reference/data-types/string.md index 42d8798c8a3..e72ce8f0b5a 100644 --- a/docs/en/sql-reference/data-types/string.md +++ b/docs/en/sql-reference/data-types/string.md @@ -12,7 +12,7 @@ When creating tables, numeric parameters for string fields can be set (e.g. `VAR ## Encodings {#encodings} -ClickHouse doesn’t have the concept of encodings. Strings can contain an arbitrary set of bytes, which are stored and output as-is. +ClickHouse does not have the concept of encodings. Strings can contain an arbitrary set of bytes, which are stored and output as-is. If you need to store texts, we recommend using UTF-8 encoding. At the very least, if your terminal uses UTF-8 (as recommended), you can read and write your values without making conversions. Similarly, certain functions for working with strings have separate variations that work under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. For example, the ‘length’ function calculates the string length in bytes, while the ‘lengthUTF8’ function calculates the string length in Unicode code points, assuming that the value is UTF-8 encoded. 
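The difference between byte length and code-point length described above is easy to see directly (a sketch):

``` sql
-- 'привет' is 6 Unicode code points encoded as 12 UTF-8 bytes.
SELECT length('привет') AS bytes, lengthUTF8('привет') AS code_points;
```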
diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md index 84ac45d2f35..c69dc4224e6 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md @@ -338,7 +338,7 @@ or ``` sql LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576 - PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict)) + PATH ./user_files/test_dict)) ``` ### complex_key_ssd_cache {#complex-key-ssd-cache} diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index dc0b6e17198..5a7efa37fd1 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -53,7 +53,7 @@ optional settings are available: or ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) SETTINGS(format_csv_allow_single_quotes = 0) ``` @@ -70,7 +70,7 @@ Types of sources (`source_type`): - [MongoDB](#dicts-external_dicts_dict_sources-mongodb) - [Redis](#dicts-external_dicts_dict_sources-redis) - [Cassandra](#dicts-external_dicts_dict_sources-cassandra) - - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql) + - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql) ## Local File {#dicts-external_dicts_dict_sources-local_file} @@ -88,7 +88,7 @@ Example of settings: or ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) ``` Setting fields: diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md index 8217fb8da3a..d229336c58d 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts.md @@ -43,7 +43,7 @@ The dictionary configuration file has the following format: You can [configure](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict.md) any number of dictionaries in the same file. -[DDL queries for dictionaries](../../../sql-reference/statements/create/dictionary.md) doesn’t require any additional records in server configuration. They allow to work with dictionaries as first-class entities, like tables or views. +[DDL queries for dictionaries](../../../sql-reference/statements/create/dictionary.md) does not require any additional records in server configuration. They allow to work with dictionaries as first-class entities, like tables or views. !!! attention "Attention" You can convert values for a small dictionary by describing it in a `SELECT` query (see the [transform](../../../sql-reference/functions/other-functions.md) function). This functionality is not related to external dictionaries. 
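A rough sketch of what such a first-class DDL dictionary looks like; the names (`ddl_demo_dict`, `default.some_source`) are placeholders, and the source and layout should be adapted to the real setup:

``` sql
-- Dictionary defined with DDL only; no entries in the server configuration are needed.
CREATE DICTIONARY ddl_demo_dict
(
    id UInt64,
    value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' DB 'default' TABLE 'some_source'))
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 300);

-- It can then be used like any other dictionary:
SELECT dictGet('ddl_demo_dict', 'value', toUInt64(1));
```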
diff --git a/docs/en/sql-reference/dictionaries/internal-dicts.md b/docs/en/sql-reference/dictionaries/internal-dicts.md index 472351a19a4..9f142c4207d 100644 --- a/docs/en/sql-reference/dictionaries/internal-dicts.md +++ b/docs/en/sql-reference/dictionaries/internal-dicts.md @@ -31,7 +31,7 @@ You can also create these files yourself. The file format is as follows: - region ID (`UInt32`) - parent region ID (`UInt32`) -- region type (`UInt8`): 1 - continent, 3 - country, 4 - federal district, 5 - region, 6 - city; other types don’t have values +- region type (`UInt8`): 1 - continent, 3 - country, 4 - federal district, 5 - region, 6 - city; other types do not have values - population (`UInt32`) — optional column `regions_names_*.txt`: TabSeparated (no header), columns: diff --git a/docs/en/sql-reference/functions/arithmetic-functions.md b/docs/en/sql-reference/functions/arithmetic-functions.md index faa03dfc9d3..3187f13b5b9 100644 --- a/docs/en/sql-reference/functions/arithmetic-functions.md +++ b/docs/en/sql-reference/functions/arithmetic-functions.md @@ -70,7 +70,7 @@ Calculates a number with the reverse sign. The result is always signed. ## abs(a) {#arithm_func-abs} -Calculates the absolute value of the number (a). That is, if a \< 0, it returns -a. For unsigned types it doesn’t do anything. For signed integer types, it returns an unsigned number. +Calculates the absolute value of the number (a). That is, if a \< 0, it returns -a. For unsigned types it does not do anything. For signed integer types, it returns an unsigned number. ## gcd(a, b) {#gcda-b} diff --git a/docs/en/sql-reference/functions/array-functions.md b/docs/en/sql-reference/functions/array-functions.md index 7d4fcf29476..611f620ed9f 100644 --- a/docs/en/sql-reference/functions/array-functions.md +++ b/docs/en/sql-reference/functions/array-functions.md @@ -125,7 +125,7 @@ hasAll(set, subset) - An empty array is a subset of any array. - `Null` processed as a value. -- Order of values in both of arrays doesn’t matter. +- Order of values in both of arrays does not matter. **Examples** @@ -162,7 +162,7 @@ hasAny(array1, array2) **Peculiar properties** - `Null` processed as a value. -- Order of values in both of arrays doesn’t matter. +- Order of values in both of arrays does not matter. **Examples** @@ -602,7 +602,7 @@ SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) as res; └────────────────────┘ ``` -Here, the elements that are passed in the second array (\[2, 1\]) define a sorting key for the corresponding element from the source array (\[‘hello’, ‘world’\]), that is, \[‘hello’ –\> 2, ‘world’ –\> 1\]. Since the lambda function doesn’t use `x`, actual values of the source array don’t affect the order in the result. So, ‘hello’ will be the second element in the result, and ‘world’ will be the first. +Here, the elements that are passed in the second array (\[2, 1\]) define a sorting key for the corresponding element from the source array (\[‘hello’, ‘world’\]), that is, \[‘hello’ –\> 2, ‘world’ –\> 1\]. Since the lambda function does not use `x`, actual values of the source array do not affect the order in the result. So, ‘hello’ will be the second element in the result, and ‘world’ will be the first. Other examples are shown below. 
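As a small illustration of the `hasAll`/`hasAny` properties described earlier (the order of values does not matter, and `Null` is treated as a value); a sketch:

``` sql
SELECT
    hasAll([1, 2, NULL], [NULL, 1]) AS all_present,  -- 1: both NULL and 1 occur in the first array
    hasAny([1, 2, 3], [4, 5, 2])    AS any_shared;   -- 1: the arrays share the value 2
```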
diff --git a/docs/en/sql-reference/functions/array-join.md b/docs/en/sql-reference/functions/array-join.md index f35e0d10117..e87d0bca4bb 100644 --- a/docs/en/sql-reference/functions/array-join.md +++ b/docs/en/sql-reference/functions/array-join.md @@ -7,7 +7,7 @@ toc_title: arrayJoin This is a very unusual function. -Normal functions don’t change a set of rows, but just change the values in each row (map). +Normal functions do not change a set of rows, but just change the values in each row (map). Aggregate functions compress a set of rows (fold or reduce). The ‘arrayJoin’ function takes each row and generates a set of rows (unfold). diff --git a/docs/en/sql-reference/functions/bit-functions.md b/docs/en/sql-reference/functions/bit-functions.md index e07f28c0f24..57e55a7da56 100644 --- a/docs/en/sql-reference/functions/bit-functions.md +++ b/docs/en/sql-reference/functions/bit-functions.md @@ -228,7 +228,7 @@ bitCount(x) - Number of bits set to one in the input number. -The function doesn’t convert input value to a larger type ([sign extension](https://en.wikipedia.org/wiki/Sign_extension)). So, for example, `bitCount(toUInt8(-1)) = 8`. +The function does not convert input value to a larger type ([sign extension](https://en.wikipedia.org/wiki/Sign_extension)). So, for example, `bitCount(toUInt8(-1)) = 8`. Type: `UInt8`. diff --git a/docs/en/sql-reference/functions/bitmap-functions.md b/docs/en/sql-reference/functions/bitmap-functions.md index 4875532605e..c695c894784 100644 --- a/docs/en/sql-reference/functions/bitmap-functions.md +++ b/docs/en/sql-reference/functions/bitmap-functions.md @@ -140,7 +140,7 @@ bitmapContains(haystack, needle) **Returned values** -- 0 — If `haystack` doesn’t contain `needle`. +- 0 — If `haystack` does not contain `needle`. - 1 — If `haystack` contains `needle`. Type: `UInt8`. diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md index 69baf64ef55..afbaed2b413 100644 --- a/docs/en/sql-reference/functions/date-time-functions.md +++ b/docs/en/sql-reference/functions/date-time-functions.md @@ -460,7 +460,7 @@ For mode values with a meaning of “with 4 or more days this year,” weeks are - Otherwise, it is the last week of the previous year, and the next week is week 1. -For mode values with a meaning of “contains January 1”, the week contains January 1 is week 1. It doesn’t matter how many days in the new year the week contained, even if it contained only one day. +For mode values with a meaning of “contains January 1”, the week contains January 1 is week 1. It does not matter how many days in the new year the week contained, even if it contained only one day. ``` sql toWeek(date, [, mode][, Timezone]) diff --git a/docs/en/sql-reference/functions/encoding-functions.md b/docs/en/sql-reference/functions/encoding-functions.md index 6b72d3c2269..167afdabb80 100644 --- a/docs/en/sql-reference/functions/encoding-functions.md +++ b/docs/en/sql-reference/functions/encoding-functions.md @@ -87,8 +87,6 @@ The function is using uppercase letters `A-F` and not using any prefixes (like ` For integer arguments, it prints hex digits (“nibbles”) from the most significant to least significant (big endian or “human readable” order). It starts with the most significant non-zero byte (leading zero bytes are omitted) but always prints both digits of every byte even if leading digit is zero. 
-Example: - **Example** Query: @@ -151,10 +149,62 @@ Result: └──────────────────┘ ``` -## unhex(str) {#unhexstr} +## unhex {#unhexstr} -Accepts a string containing any number of hexadecimal digits, and returns a string containing the corresponding bytes. Supports both uppercase and lowercase letters A-F. The number of hexadecimal digits does not have to be even. If it is odd, the last digit is interpreted as the least significant half of the 00-0F byte. If the argument string contains anything other than hexadecimal digits, some implementation-defined result is returned (an exception isn’t thrown). -If you want to convert the result to a number, you can use the ‘reverse’ and ‘reinterpretAsType’ functions. +Performs the opposite operation of [hex](#hex). It interprets each pair of hexadecimal digits (in the argument) as a number and converts it to the byte represented by the number. The return value is a binary string (BLOB). + +If you want to convert the result to a number, you can use the [reverse](../../sql-reference/functions/string-functions.md#reverse) and [reinterpretAs](../../sql-reference/functions/type-conversion-functions.md#type-conversion-functions) functions. + +!!! note "Note" + If `unhex` is invoked from within the `clickhouse-client`, binary strings display using UTF-8. + +Alias: `UNHEX`. + +**Syntax** + +``` sql +unhex(arg) +``` + +**Arguments** + +- `arg` — A string containing any number of hexadecimal digits. Type: [String](../../sql-reference/data-types/string.md). + +Supports both uppercase and lowercase letters `A-F`. The number of hexadecimal digits does not have to be even. If it is odd, the last digit is interpreted as the least significant half of the `00-0F` byte. If the argument string contains anything other than hexadecimal digits, some implementation-defined result is returned (an exception isn’t thrown). For a numeric argument the inverse of hex(N) is not performed by unhex(). + +**Returned value** + +- A binary string (BLOB). + +Type: [String](../../sql-reference/data-types/string.md). + +**Example** + +Query: +``` sql +SELECT unhex('303132'), UNHEX('4D7953514C'); +``` + +Result: +``` text +┌─unhex('303132')─┬─unhex('4D7953514C')─┐ +│ 012 │ MySQL │ +└─────────────────┴─────────────────────┘ +``` + +Query: + +``` sql +SELECT reinterpretAsUInt64(reverse(unhex('FFF'))) AS num; +``` + +Result: + +``` text +┌──num─┐ +│ 4095 │ +└──────┘ +``` ## UUIDStringToNum(str) {#uuidstringtonumstr} @@ -171,4 +221,3 @@ Accepts an integer. Returns a string containing the list of powers of two that t ## bitmaskToArray(num) {#bitmasktoarraynum} Accepts an integer. Returns an array of UInt64 numbers containing the list of powers of two that total the source number when summed. Numbers in the array are in ascending order. - diff --git a/docs/en/sql-reference/functions/ext-dict-functions.md b/docs/en/sql-reference/functions/ext-dict-functions.md index 8eb10bd0208..7c0fe11ae64 100644 --- a/docs/en/sql-reference/functions/ext-dict-functions.md +++ b/docs/en/sql-reference/functions/ext-dict-functions.md @@ -4,7 +4,7 @@ toc_title: External Dictionaries --- !!! attention "Attention" - `dict_name` parameter must be fully qualified for dictionaries created with DDL queries. Eg. `.`. + For dictionaries, created with [DDL queries](../../sql-reference/statements/create/dictionary.md), the `dict_name` parameter must be fully specified, like `.`. Otherwise, the current database is used. 
# Functions for Working with External Dictionaries {#ext_dict_functions} @@ -25,7 +25,7 @@ dictGetOrNull('dict_name', attr_name, id_expr) - `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal). - `attr_names` — Name of the column of the dictionary, [String literal](../../sql-reference/syntax.md#syntax-string-literal), or tuple of column names, [Tuple](../../sql-reference/data-types/tuple.md)([String literal](../../sql-reference/syntax.md#syntax-string-literal)). - `id_expr` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md) or [Tuple](../../sql-reference/data-types/tuple.md)-type value depending on the dictionary configuration. -- `default_value_expr` — Values returned if the dictionary doesn’t contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) or [Tuple](../../sql-reference/data-types/tuple.md)([Expression](../../sql-reference/syntax.md#syntax-expressions)), returning the value (or values) in the data types configured for the `attr_names` attribute. +- `default_value_expr` — Values returned if the dictionary does not contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) or [Tuple](../../sql-reference/data-types/tuple.md)([Expression](../../sql-reference/syntax.md#syntax-expressions)), returning the value (or values) in the data types configured for the `attr_names` attribute. **Returned value** @@ -37,7 +37,7 @@ dictGetOrNull('dict_name', attr_name, id_expr) - `dictGetOrDefault` returns the value passed as the `default_value_expr` parameter. - `dictGetOrNull` returns `NULL` in case key was not found in dictionary. -ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn’t match the attribute data type. +ClickHouse throws an exception if it cannot parse the value of the attribute or the value does not match the attribute data type. **Example for simple key dictionary** @@ -288,6 +288,119 @@ dictIsIn('dict_name', child_id_expr, ancestor_id_expr) Type: `UInt8`. +## dictGetChildren {#dictgetchildren} + +Returns first-level children as an array of indexes. It is the inverse transformation for [dictGetHierarchy](#dictgethierarchy). + +**Syntax** + +``` sql +dictGetChildren(dict_name, key) +``` + +**Arguments** + +- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal). +- `key` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value. + +**Returned values** + +- First-level descendants for the key. + +Type: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Consider the hierarchic dictionary: + +``` text +┌─id─┬─parent_id─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +│ 4 │ 2 │ +└────┴───────────┘ +``` + +First-level children: + +``` sql +SELECT dictGetChildren('hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetChildren('hierarchy_flat_dictionary', number)─┐ +│ [1] │ +│ [2,3] │ +│ [4] │ +│ [] │ +└──────────────────────────────────────────────────────┘ +``` + +## dictGetDescendant {#dictgetdescendant} + +Returns all descendants as if [dictGetChildren](#dictgetchildren) function was applied `level` times recursively. 
+ +**Syntax** + +``` sql +dictGetDescendants(dict_name, key, level) +``` + +**Arguments** + +- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal). +- `key` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value. +- `level` — Hierarchy level. If `level = 0` returns all descendants to the end. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Returned values** + +- Descendants for the key. + +Type: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)). + +**Example** + +Consider the hierarchic dictionary: + +``` text +┌─id─┬─parent_id─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +│ 4 │ 2 │ +└────┴───────────┘ +``` +All descendants: + +``` sql +SELECT dictGetDescendants('hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetDescendants('hierarchy_flat_dictionary', number)─┐ +│ [1,2,3,4] │ +│ [2,3,4] │ +│ [4] │ +│ [] │ +└─────────────────────────────────────────────────────────┘ +``` + +First-level descendants: + +``` sql +SELECT dictGetDescendants('hierarchy_flat_dictionary', number, 1) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetDescendants('hierarchy_flat_dictionary', number, 1)─┐ +│ [1] │ +│ [2,3] │ +│ [4] │ +│ [] │ +└────────────────────────────────────────────────────────────┘ +``` + ## Other Functions {#ext_dict_functions-other} ClickHouse supports specialized functions that convert dictionary attribute values to a specific data type regardless of the dictionary configuration. @@ -316,7 +429,7 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr) - `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal). - `attr_name` — Name of the column of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal). - `id_expr` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md) or [Tuple](../../sql-reference/data-types/tuple.md)-type value depending on the dictionary configuration. -- `default_value_expr` — Value returned if the dictionary doesn’t contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning the value in the data type configured for the `attr_name` attribute. +- `default_value_expr` — Value returned if the dictionary does not contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning the value in the data type configured for the `attr_name` attribute. **Returned value** @@ -327,4 +440,4 @@ dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr) - `dictGet[Type]` returns the content of the `` element specified for the attribute in the dictionary configuration. - `dictGet[Type]OrDefault` returns the value passed as the `default_value_expr` parameter. -ClickHouse throws an exception if it cannot parse the value of the attribute or the value doesn’t match the attribute data type. +ClickHouse throws an exception if it cannot parse the value of the attribute or the value does not match the attribute data type. 
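For example, a small sketch of the typed variants described above (the dictionary `db1.taxi_zones` and its `String` attribute `zone_name` are hypothetical):

``` sql
-- Returns the `zone_name` attribute (declared as String in the hypothetical dictionary) for key 132:
SELECT dictGetString('db1.taxi_zones', 'zone_name', toUInt64(132));

-- Returns 'unknown' instead of throwing an exception when the key is missing from the dictionary:
SELECT dictGetStringOrDefault('db1.taxi_zones', 'zone_name', toUInt64(999999), 'unknown');
```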
diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md index 0ea4cfd6fbe..35a42c49a41 100644 --- a/docs/en/sql-reference/functions/hash-functions.md +++ b/docs/en/sql-reference/functions/hash-functions.md @@ -43,7 +43,7 @@ SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00') ## MD5 {#hash_functions-md5} Calculates the MD5 from a string and returns the resulting set of bytes as FixedString(16). -If you don’t need MD5 in particular, but you need a decent cryptographic 128-bit hash, use the ‘sipHash128’ function instead. +If you do not need MD5 in particular, but you need a decent cryptographic 128-bit hash, use the ‘sipHash128’ function instead. If you want to get the same result as output by the md5sum utility, use lower(hex(MD5(s))). ## sipHash64 {#hash_functions-siphash64} diff --git a/docs/en/sql-reference/functions/index.md b/docs/en/sql-reference/functions/index.md index 32408759b98..58e0994a11d 100644 --- a/docs/en/sql-reference/functions/index.md +++ b/docs/en/sql-reference/functions/index.md @@ -6,7 +6,7 @@ toc_title: Introduction # Functions {#functions} -There are at least\* two types of functions - regular functions (they are just called “functions”) and aggregate functions. These are completely different concepts. Regular functions work as if they are applied to each row separately (for each row, the result of the function doesn’t depend on the other rows). Aggregate functions accumulate a set of values from various rows (i.e. they depend on the entire set of rows). +There are at least\* two types of functions - regular functions (they are just called “functions”) and aggregate functions. These are completely different concepts. Regular functions work as if they are applied to each row separately (for each row, the result of the function does not depend on the other rows). Aggregate functions accumulate a set of values from various rows (i.e. they depend on the entire set of rows). In this section we discuss regular functions. For aggregate functions, see the section “Aggregate functions”. @@ -14,7 +14,7 @@ In this section we discuss regular functions. For aggregate functions, see the s ## Strong Typing {#strong-typing} -In contrast to standard SQL, ClickHouse has strong typing. In other words, it doesn’t make implicit conversions between types. Each function works for a specific set of types. This means that sometimes you need to use type conversion functions. +In contrast to standard SQL, ClickHouse has strong typing. In other words, it does not make implicit conversions between types. Each function works for a specific set of types. This means that sometimes you need to use type conversion functions. ## Common Subexpression Elimination {#common-subexpression-elimination} @@ -78,7 +78,7 @@ For example, in the query `SELECT f(sum(g(x))) FROM distributed_table GROUP BY h - if a `distributed_table` has at least two shards, the functions ‘g’ and ‘h’ are performed on remote servers, and the function ‘f’ is performed on the requestor server. - if a `distributed_table` has only one shard, all the ‘f’, ‘g’, and ‘h’ functions are performed on this shard’s server. -The result of a function usually doesn’t depend on which server it is performed on. However, sometimes this is important. +The result of a function usually does not depend on which server it is performed on. However, sometimes this is important. 
For example, functions that work with dictionaries use the dictionary that exists on the server they are running on. Another example is the `hostName` function, which returns the name of the server it is running on in order to make `GROUP BY` by servers in a `SELECT` query. diff --git a/docs/en/sql-reference/functions/ip-address-functions.md b/docs/en/sql-reference/functions/ip-address-functions.md index 0b5dd7160b8..137ebc2407d 100644 --- a/docs/en/sql-reference/functions/ip-address-functions.md +++ b/docs/en/sql-reference/functions/ip-address-functions.md @@ -48,7 +48,7 @@ LIMIT 10 └────────────────┴───────┘ ``` -Since using ‘xxx’ is highly unusual, this may be changed in the future. We recommend that you don’t rely on the exact format of this fragment. +Since using ‘xxx’ is highly unusual, this may be changed in the future. We recommend that you do not rely on the exact format of this fragment. ### IPv6NumToString(x) {#ipv6numtostringx} @@ -422,7 +422,7 @@ Type: [UInt8](../../sql-reference/data-types/int-uint.md). Query: ``` sql -SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8') +SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8'); ``` Result: @@ -436,7 +436,7 @@ Result: Query: ``` sql -SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16') +SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16'); ``` Result: diff --git a/docs/en/sql-reference/functions/json-functions.md b/docs/en/sql-reference/functions/json-functions.md index d545a0ae4e6..e731180c393 100644 --- a/docs/en/sql-reference/functions/json-functions.md +++ b/docs/en/sql-reference/functions/json-functions.md @@ -12,7 +12,7 @@ The following assumptions are made: 1. The field name (function argument) must be a constant. 2. The field name is somehow canonically encoded in JSON. For example: `visitParamHas('{"abc":"def"}', 'abc') = 1`, but `visitParamHas('{"\\u0061\\u0062\\u0063":"def"}', 'abc') = 0` 3. Fields are searched for on any nesting level, indiscriminately. If there are multiple matching fields, the first occurrence is used. -4. The JSON doesn’t have space characters outside of string literals. +4. The JSON does not have space characters outside of string literals. ## visitParamHas(params, name) {#visitparamhasparams-name} @@ -22,7 +22,7 @@ Alias: `simpleJSONHas`. ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name} -Parses UInt64 from the value of the field named `name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field doesn’t exist, or it exists but doesn’t contain a number, it returns 0. +Parses UInt64 from the value of the field named `name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field does not exist, or it exists but does not contain a number, it returns 0. Alias: `simpleJSONExtractUInt`. @@ -106,7 +106,7 @@ SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0 - Positive integer = access the n-th member/key from the beginning. - Negative integer = access the n-th member/key from the end. -Minimum index of the element is 1. Thus the element 0 doesn’t exist. +Minimum index of the element is 1. Thus the element 0 does not exist. You may use integers to access both JSON arrays and JSON objects. 
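To illustrate the indexing rules above, a short sketch using the `JSONExtract*` functions (expected values are shown in the comments):

``` sql
-- Index 1 is the first member of the top-level object, so its string value is 'hello':
SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1);

-- A negative index counts from the end: -2 is the second array element from the end, i.e. 200:
SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -2);
```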
diff --git a/docs/en/sql-reference/functions/other-functions.md b/docs/en/sql-reference/functions/other-functions.md index ecb7b982157..8163650efab 100644 --- a/docs/en/sql-reference/functions/other-functions.md +++ b/docs/en/sql-reference/functions/other-functions.md @@ -1654,7 +1654,7 @@ joinGet(join_storage_table_name, `value_column`, join_keys) Returns list of values corresponded to list of keys. -If certain doesn’t exist in source table then `0` or `null` will be returned based on [join_use_nulls](../../operations/settings/settings.md#join_use_nulls) setting. +If certain does not exist in source table then `0` or `null` will be returned based on [join_use_nulls](../../operations/settings/settings.md#join_use_nulls) setting. More info about `join_use_nulls` in [Join operation](../../engines/table-engines/special/join.md). @@ -1714,7 +1714,7 @@ Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Too many. ## identity {#identity} -Returns the same value that was used as its argument. Used for debugging and testing, allows to cancel using index, and get the query performance of a full scan. When query is analyzed for possible use of index, the analyzer doesn’t look inside `identity` functions. Also constant folding is not applied too. +Returns the same value that was used as its argument. Used for debugging and testing, allows to cancel using index, and get the query performance of a full scan. When query is analyzed for possible use of index, the analyzer does not look inside `identity` functions. Also constant folding is not applied too. **Syntax** diff --git a/docs/en/sql-reference/functions/rounding-functions.md b/docs/en/sql-reference/functions/rounding-functions.md index c0bd44a6467..5f74c6329d1 100644 --- a/docs/en/sql-reference/functions/rounding-functions.md +++ b/docs/en/sql-reference/functions/rounding-functions.md @@ -14,7 +14,7 @@ Returns the largest round number that is less than or equal to `x`. A round numb Examples: `floor(123.45, 1) = 123.4, floor(123.45, -1) = 120.` `x` is any numeric type. The result is a number of the same type. -For integer arguments, it makes sense to round with a negative `N` value (for non-negative `N`, the function doesn’t do anything). +For integer arguments, it makes sense to round with a negative `N` value (for non-negative `N`, the function does not do anything). If rounding causes overflow (for example, floor(-128, -1)), an implementation-specific result is returned. ## ceil(x\[, N\]), ceiling(x\[, N\]) {#ceilx-n-ceilingx-n} diff --git a/docs/en/sql-reference/functions/splitting-merging-functions.md b/docs/en/sql-reference/functions/splitting-merging-functions.md index d61896b6d98..2d384f1aa3c 100644 --- a/docs/en/sql-reference/functions/splitting-merging-functions.md +++ b/docs/en/sql-reference/functions/splitting-merging-functions.md @@ -13,7 +13,7 @@ Returns an array of selected substrings. Empty substrings may be selected if the **Syntax** ``` sql -splitByChar(, ) +splitByChar(separator, s) ``` **Arguments** @@ -29,12 +29,12 @@ Returns an array of selected substrings. Empty substrings may be selected when: - There are multiple consecutive separators; - The original string `s` is empty. -Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). 
**Example** ``` sql -SELECT splitByChar(',', '1,2,3,abcde') +SELECT splitByChar(',', '1,2,3,abcde'); ``` ``` text @@ -50,7 +50,7 @@ Splits a string into substrings separated by a string. It uses a constant string **Syntax** ``` sql -splitByString(, ) +splitByString(separator, s) ``` **Arguments** @@ -62,7 +62,7 @@ splitByString(, ) Returns an array of selected substrings. Empty substrings may be selected when: -Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). - A non-empty separator occurs at the beginning or end of the string; - There are multiple consecutive non-empty separators; @@ -71,7 +71,7 @@ Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-ref **Example** ``` sql -SELECT splitByString(', ', '1, 2 3, 4,5, abcde') +SELECT splitByString(', ', '1, 2 3, 4,5, abcde'); ``` ``` text @@ -81,7 +81,7 @@ SELECT splitByString(', ', '1, 2 3, 4,5, abcde') ``` ``` sql -SELECT splitByString('', 'abcde') +SELECT splitByString('', 'abcde'); ``` ``` text @@ -92,12 +92,12 @@ SELECT splitByString('', 'abcde') ## splitByRegexp(regexp, s) {#splitbyregexpseparator-s} -Splits a string into substrings separated by a regular expression. It uses a regular expression string `regexp` as the separator. If the `regexp` is empty, it will split the string s into an array of single characters. If no match is found for this regex expression, the string `s` won't be split. +Splits a string into substrings separated by a regular expression. It uses a regular expression string `regexp` as the separator. If the `regexp` is empty, it will split the string `s` into an array of single characters. If no match is found for this regular expression, the string `s` won't be split. **Syntax** ``` sql -splitByRegexp(, ) +splitByRegexp(regexp, s) ``` **Arguments** @@ -109,28 +109,36 @@ splitByRegexp(, ) Returns an array of selected substrings. Empty substrings may be selected when: - - A non-empty regular expression match occurs at the beginning or end of the string; - There are multiple consecutive non-empty regular expression matches; - The original string `s` is empty while the regular expression is not empty. -Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). + **Example** +Query: + ``` sql -SELECT splitByRegexp('\\d+', 'a12bc23de345f') +SELECT splitByRegexp('\\d+', 'a12bc23de345f'); ``` +Result: + ``` text ┌─splitByRegexp('\\d+', 'a12bc23de345f')─┐ │ ['a','bc','de','f'] │ └────────────────────────────────────────┘ ``` +Query: + ``` sql -SELECT splitByRegexp('', 'abcde') +SELECT splitByRegexp('', 'abcde'); ``` +Result: + ``` text ┌─splitByRegexp('', 'abcde')─┐ │ ['a','b','c','d','e'] │ @@ -149,7 +157,7 @@ Selects substrings of consecutive bytes from the ranges a-z and A-Z.Returns an a **Example** ``` sql -SELECT alphaTokens('abca1abc') +SELECT alphaTokens('abca1abc'); ``` ``` text diff --git a/docs/en/sql-reference/functions/string-functions.md b/docs/en/sql-reference/functions/string-functions.md index 85570cb408d..5074f478bc0 100644 --- a/docs/en/sql-reference/functions/string-functions.md +++ b/docs/en/sql-reference/functions/string-functions.md @@ -29,17 +29,17 @@ The function also works for arrays. 
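As a quick sketch of the note above: `length` counts bytes for strings and elements for arrays.

``` sql
-- 5 bytes in the string, 3 elements in the array:
SELECT length('hello'), length([1, 2, 3]);
```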
## lengthUTF8 {#lengthutf8} -Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it doesn’t throw an exception). +Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it does not throw an exception). The result type is UInt64. ## char_length, CHAR_LENGTH {#char-length} -Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it doesn’t throw an exception). +Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it does not throw an exception). The result type is UInt64. ## character_length, CHARACTER_LENGTH {#character-length} -Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it doesn’t throw an exception). +Returns the length of a string in Unicode code points (not in characters), assuming that the string contains a set of bytes that make up UTF-8 encoded text. If this assumption is not met, it returns some result (it does not throw an exception). The result type is UInt64. ## lower, lcase {#lower} @@ -53,14 +53,14 @@ Converts ASCII Latin symbols in a string to uppercase. ## lowerUTF8 {#lowerutf8} Converts a string to lowercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text. -It doesn’t detect the language. So for Turkish the result might not be exactly correct. +It does not detect the language. So for Turkish the result might not be exactly correct. If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point. If the string contains a set of bytes that is not UTF-8, then the behavior is undefined. ## upperUTF8 {#upperutf8} Converts a string to uppercase, assuming the string contains a set of bytes that make up a UTF-8 encoded text. -It doesn’t detect the language. So for Turkish the result might not be exactly correct. +It does not detect the language. So for Turkish the result might not be exactly correct. If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point. If the string contains a set of bytes that is not UTF-8, then the behavior is undefined. @@ -139,7 +139,7 @@ Reverses the string (as a sequence of bytes). ## reverseUTF8 {#reverseutf8} -Reverses a sequence of Unicode code points, assuming that the string contains a set of bytes representing a UTF-8 text. Otherwise, it does something else (it doesn’t throw an exception). +Reverses a sequence of Unicode code points, assuming that the string contains a set of bytes representing a UTF-8 text. Otherwise, it does something else (it does not throw an exception). 
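A short sketch of the difference between byte-wise and code-point-wise reversal (the Cyrillic sample string is arbitrary):

``` sql
-- `reverseUTF8` reverses code points, so multi-byte characters stay intact ('тевирп'):
SELECT reverseUTF8('привет');

-- `reverse` reverses bytes, which is only safe for single-byte (ASCII) text ('cba'):
SELECT reverse('abc');
```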
## format(pattern, s0, s1, …) {#format} @@ -264,7 +264,7 @@ Returns a substring starting with the byte from the ‘offset’ index that is ## substringUTF8(s, offset, length) {#substringutf8} -The same as ‘substring’, but for Unicode code points. Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, it returns some result (it doesn’t throw an exception). +The same as ‘substring’, but for Unicode code points. Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, it returns some result (it does not throw an exception). ## appendTrailingCharIfAbsent(s, c) {#appendtrailingcharifabsent} @@ -305,7 +305,7 @@ SELECT startsWith('Spider-Man', 'Spi'); **Returned values** - 1, if the string starts with the specified prefix. -- 0, if the string doesn’t start with the specified prefix. +- 0, if the string does not start with the specified prefix. **Example** @@ -363,7 +363,7 @@ Result: ## trimLeft {#trimleft} -Removes all consecutive occurrences of common whitespace (ASCII character 32) from the beginning of a string. It doesn’t remove other kinds of whitespace characters (tab, no-break space, etc.). +Removes all consecutive occurrences of common whitespace (ASCII character 32) from the beginning of a string. It does not remove other kinds of whitespace characters (tab, no-break space, etc.). **Syntax** @@ -401,7 +401,7 @@ Result: ## trimRight {#trimright} -Removes all consecutive occurrences of common whitespace (ASCII character 32) from the end of a string. It doesn’t remove other kinds of whitespace characters (tab, no-break space, etc.). +Removes all consecutive occurrences of common whitespace (ASCII character 32) from the end of a string. It does not remove other kinds of whitespace characters (tab, no-break space, etc.). **Syntax** @@ -439,7 +439,7 @@ Result: ## trimBoth {#trimboth} -Removes all consecutive occurrences of common whitespace (ASCII character 32) from both ends of a string. It doesn’t remove other kinds of whitespace characters (tab, no-break space, etc.). +Removes all consecutive occurrences of common whitespace (ASCII character 32) from both ends of a string. It does not remove other kinds of whitespace characters (tab, no-break space, etc.). **Syntax** diff --git a/docs/en/sql-reference/functions/string-search-functions.md b/docs/en/sql-reference/functions/string-search-functions.md index 01b1dd2d004..551c4aee8f0 100644 --- a/docs/en/sql-reference/functions/string-search-functions.md +++ b/docs/en/sql-reference/functions/string-search-functions.md @@ -126,7 +126,7 @@ Result: The same as [position](#position) returns the position (in bytes) of the found substring in the string, starting from 1. Use the function for a case-insensitive search. -Works under the assumption that the string contains a set of bytes representing a single-byte encoded text. If this assumption is not met and a character can’t be represented using a single byte, the function doesn’t throw an exception and returns some unexpected result. If character can be represented using two bytes, it will use two bytes and so on. +Works under the assumption that the string contains a set of bytes representing a single-byte encoded text. If this assumption is not met and a character can’t be represented using a single byte, the function does not throw an exception and returns some unexpected result. 
If character can be represented using two bytes, it will use two bytes and so on. **Syntax** @@ -167,7 +167,7 @@ Result: Returns the position (in Unicode points) of the found substring in the string, starting from 1. -Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, the function doesn’t throw an exception and returns some unexpected result. If character can be represented using two Unicode points, it will use two and so on. +Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, the function does not throw an exception and returns some unexpected result. If character can be represented using two Unicode points, it will use two and so on. For a case-insensitive search, use the function [positionCaseInsensitiveUTF8](#positioncaseinsensitiveutf8). @@ -242,7 +242,7 @@ Result: The same as [positionUTF8](#positionutf8), but is case-insensitive. Returns the position (in Unicode points) of the found substring in the string, starting from 1. -Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, the function doesn’t throw an exception and returns some unexpected result. If character can be represented using two Unicode points, it will use two and so on. +Works under the assumption that the string contains a set of bytes representing a UTF-8 encoded text. If this assumption is not met, the function does not throw an exception and returns some unexpected result. If character can be represented using two Unicode points, it will use two and so on. **Syntax** @@ -349,7 +349,7 @@ For a case-insensitive search or/and in UTF-8 format use functions `multiSearchA Checks whether the string matches the `pattern` regular expression. A `re2` regular expression. The [syntax](https://github.com/google/re2/wiki/Syntax) of the `re2` regular expressions is more limited than the syntax of the Perl regular expressions. -Returns 0 if it doesn’t match, or 1 if it matches. +Returns 0 if it does not match, or 1 if it matches. Note that the backslash symbol (`\`) is used for escaping in the regular expression. The same symbol is used for escaping in string literals. So in order to escape the symbol in a regular expression, you must write two backslashes (\\) in a string literal. @@ -391,11 +391,11 @@ The same as `multiFuzzyMatchAny`, but returns the array of all indices in any or ## extract(haystack, pattern) {#extracthaystack-pattern} -Extracts a fragment of a string using a regular expression. If ‘haystack’ doesn’t match the ‘pattern’ regex, an empty string is returned. If the regex doesn’t contain subpatterns, it takes the fragment that matches the entire regex. Otherwise, it takes the fragment that matches the first subpattern. +Extracts a fragment of a string using a regular expression. If ‘haystack’ does not match the ‘pattern’ regex, an empty string is returned. If the regex does not contain subpatterns, it takes the fragment that matches the entire regex. Otherwise, it takes the fragment that matches the first subpattern. ## extractAll(haystack, pattern) {#extractallhaystack-pattern} -Extracts all the fragments of a string using a regular expression. If ‘haystack’ doesn’t match the ‘pattern’ regex, an empty string is returned. Returns an array of strings consisting of all matches to the regex. 
In general, the behavior is the same as the ‘extract’ function (it takes the first subpattern, or the entire expression if there isn’t a subpattern). +Extracts all the fragments of a string using a regular expression. If ‘haystack’ does not match the ‘pattern’ regex, an empty string is returned. Returns an array of strings consisting of all matches to the regex. In general, the behavior is the same as the ‘extract’ function (it takes the first subpattern, or the entire expression if there isn’t a subpattern). ## extractAllGroupsHorizontal {#extractallgroups-horizontal} @@ -419,7 +419,7 @@ extractAllGroupsHorizontal(haystack, pattern) - Type: [Array](../../sql-reference/data-types/array.md). -If `haystack` doesn’t match the `pattern` regex, an array of empty arrays is returned. +If `haystack` does not match the `pattern` regex, an array of empty arrays is returned. **Example** @@ -460,7 +460,7 @@ extractAllGroupsVertical(haystack, pattern) - Type: [Array](../../sql-reference/data-types/array.md). -If `haystack` doesn’t match the `pattern` regex, an empty array is returned. +If `haystack` does not match the `pattern` regex, an empty array is returned. **Example** @@ -513,7 +513,7 @@ ilike(haystack, pattern) **Arguments** - `haystack` — Input string. [String](../../sql-reference/syntax.md#syntax-string-literal). -- `pattern` — If `pattern` doesn't contain percent signs or underscores, then the `pattern` only represents the string itself. An underscore (`_`) in `pattern` stands for (matches) any single character. A percent sign (`%`) matches any sequence of zero or more characters. +- `pattern` — If `pattern` does not contain percent signs or underscores, then the `pattern` only represents the string itself. An underscore (`_`) in `pattern` stands for (matches) any single character. A percent sign (`%`) matches any sequence of zero or more characters. Some `pattern` examples: @@ -527,7 +527,7 @@ Some `pattern` examples: **Returned values** - True, if the string matches `pattern`. -- False, if the string doesn't match `pattern`. +- False, if the string does not match `pattern`. **Example** diff --git a/docs/en/sql-reference/functions/tuple-functions.md b/docs/en/sql-reference/functions/tuple-functions.md index 86442835425..4189d0feeb5 100644 --- a/docs/en/sql-reference/functions/tuple-functions.md +++ b/docs/en/sql-reference/functions/tuple-functions.md @@ -153,7 +153,7 @@ Result: Can be used with [MinHash](../../sql-reference/functions/hash-functions.md#ngramminhash) functions for detection of semi-duplicate strings: ``` sql -SELECT tupleHammingDistance(wordShingleMinHash(string), wordShingleMinHashCaseInsensitive(string)) as HammingDistance FROM (SELECT 'Clickhouse is a column-oriented database management system for online analytical processing of queries.' AS string); +SELECT tupleHammingDistance(wordShingleMinHash(string), wordShingleMinHashCaseInsensitive(string)) as HammingDistance FROM (SELECT 'ClickHouse is a column-oriented database management system for online analytical processing of queries.' 
AS string); ``` Result: diff --git a/docs/en/sql-reference/functions/type-conversion-functions.md b/docs/en/sql-reference/functions/type-conversion-functions.md index d8d13d81d97..661469e6901 100644 --- a/docs/en/sql-reference/functions/type-conversion-functions.md +++ b/docs/en/sql-reference/functions/type-conversion-functions.md @@ -373,7 +373,7 @@ This function accepts a number or date or date with time, and returns a FixedStr ## reinterpretAsUUID {#reinterpretasuuid} -This function accepts 16 bytes string, and returns UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the functions work as if the string is padded with the necessary number of null bytes to the end. If the string longer than 16 bytes, the extra bytes at the end are ignored. +Accepts 16 bytes string and returns UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes to the end. If the string longer than 16 bytes, the extra bytes at the end are ignored. **Syntax** @@ -429,7 +429,24 @@ Result: ## reinterpret(x, T) {#type_conversion_function-reinterpret} -Use the same source in-memory bytes sequence for `x` value and reinterpret it to destination type +Uses the same source in-memory bytes sequence for `x` value and reinterprets it to destination type. + +**Syntax** + +``` sql +reinterpret(x, type) +``` + +**Arguments** + +- `x` — Any type. +- `type` — Destination type. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Destination type value. + +**Examples** Query: ```sql @@ -448,11 +465,27 @@ Result: ## CAST(x, T) {#type_conversion_function-cast} -Converts input value `x` to the `T` data type. Unlike to `reinterpret` function use external representation of `x` value. +Converts input value `x` to the `T` data type. Unlike to `reinterpret` function, type conversion is performed in a natural way. The syntax `CAST(x AS t)` is also supported. -Note, that if value `x` does not fit the bounds of type T, the function overflows. For example, CAST(-1, 'UInt8') returns 255. +!!! note "Note" + If value `x` does not fit the bounds of type `T`, the function overflows. For example, `CAST(-1, 'UInt8')` returns `255`. + +**Syntax** + +``` sql +CAST(x, T) +``` + +**Arguments** + +- `x` — Any type. +- `T` — Destination type. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Destination type value. **Examples** @@ -460,9 +493,9 @@ Query: ```sql SELECT - cast(toInt8(-1), 'UInt8') AS cast_int_to_uint, - cast(toInt8(1), 'Float32') AS cast_int_to_float, - cast('1', 'UInt32') AS cast_string_to_int + CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint, + CAST(toInt8(1), 'Float32') AS cast_int_to_float, + CAST('1', 'UInt32') AS cast_string_to_int; ``` Result: @@ -492,7 +525,7 @@ Result: └─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘ ``` -Conversion to FixedString(N) only works for arguments of type String or FixedString(N). +Conversion to FixedString(N) only works for arguments of type [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md). Type conversion to [Nullable](../../sql-reference/data-types/nullable.md) and back is supported. 
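For instance, a minimal sketch of the Nullable round trip mentioned above:

``` sql
-- Wrapping a value into Nullable changes only the declared type:
SELECT toTypeName(CAST(10, 'Nullable(UInt8)'));

-- Casting back to the plain type works as long as the value is not NULL:
SELECT CAST(CAST(10, 'Nullable(UInt8)'), 'UInt8');
```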
@@ -1038,7 +1071,7 @@ Result:

## parseDateTime64BestEffort {#parsedatetime64besteffort}

-Same as [parseDateTimeBestEffort](#parsedatetimebesteffort) function but also parse milliseconds and microseconds and return `DateTime64(3)` or `DateTime64(6)` data types.
+Same as the [parseDateTimeBestEffort](#parsedatetimebesteffort) function, but it also parses milliseconds and microseconds and returns the [DateTime64](../../sql-reference/data-types/datetime64.md) data type.

**Syntax**

``` sql
parseDateTime64BestEffort(time_string [, precision [, time_zone]])
```

**Parameters**

- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
-- `precision` — `3` for milliseconds, `6` for microseconds. Default `3`. Optional [UInt8](../../sql-reference/data-types/int-uint.md).
+- `precision` — Required precision. `3` — for milliseconds, `6` — for microseconds. Default — `3`. Optional. [UInt8](../../sql-reference/data-types/int-uint.md).
- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).

+**Returned value**
+
+- `time_string` converted to the [DateTime64](../../sql-reference/data-types/datetime64.md) data type.
+
**Examples**

Query:

@@ -1064,7 +1101,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
-FORMAT PrettyCompactMonoBlcok
+FORMAT PrettyCompactMonoBlock;
```

Result:

@@ -1131,12 +1168,14 @@ Result:

## toUnixTimestamp64Nano {#tounixtimestamp64nano}

-Converts a `DateTime64` to a `Int64` value with fixed sub-second precision.
-Input value is scaled up or down appropriately depending on it precision. Please note that output value is a timestamp in UTC, not in timezone of `DateTime64`.
+Converts a `DateTime64` to an `Int64` value with fixed sub-second precision. The input value is scaled up or down appropriately depending on its precision.
+
+!!! info "Note"
+    The output value is a timestamp in UTC, not in the timezone of `DateTime64`.

**Syntax**

-``` sql
+```sql
toUnixTimestamp64Milli(value)
```

@@ -1152,7 +1191,7 @@ toUnixTimestamp64Milli(value)

Query:

-``` sql
+```sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64);
```

@@ -1298,4 +1337,3 @@ Result:
│ 2,"good"                                  │
└───────────────────────────────────────────┘
```
-
diff --git a/docs/en/sql-reference/functions/url-functions.md b/docs/en/sql-reference/functions/url-functions.md
index 9feb7a3c711..397ae45ec71 100644
--- a/docs/en/sql-reference/functions/url-functions.md
+++ b/docs/en/sql-reference/functions/url-functions.md
@@ -5,7 +5,7 @@ toc_title: URLs

# Functions for Working with URLs {#functions-for-working-with-urls}

-All these functions don’t follow the RFC. They are maximally simplified for improved performance.
+All these functions do not follow the RFC. They are maximally simplified for improved performance.

## Functions that Extract Parts of a URL {#functions-that-extract-parts-of-a-url}

@@ -398,7 +398,7 @@ Result:

## Functions that Remove Part of a URL {#functions-that-remove-part-of-a-url}

-If the URL doesn’t have anything similar, the URL remains unchanged.
+If the URL does not have anything similar, the URL remains unchanged.
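As a hedged sketch of that rule, using `cutWWW` as an example (the URLs are arbitrary):

``` sql
-- 'www.' is removed from the first URL; the second one has nothing to cut and is returned as is:
SELECT cutWWW('http://www.example.com/path'), cutWWW('http://example.com/path');
```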
### cutWWW {#cutwww} diff --git a/docs/en/sql-reference/functions/ym-dict-functions.md b/docs/en/sql-reference/functions/ym-dict-functions.md index 941f75ff006..f947c81c7a9 100644 --- a/docs/en/sql-reference/functions/ym-dict-functions.md +++ b/docs/en/sql-reference/functions/ym-dict-functions.md @@ -136,7 +136,7 @@ In the Yandex geobase, the population might be recorded for child regions, but n ### regionIn(lhs, rhs\[, geobase\]) {#regioninlhs-rhs-geobase} -Checks whether a ‘lhs’ region belongs to a ‘rhs’ region. Returns a UInt8 number equal to 1 if it belongs, or 0 if it doesn’t belong. +Checks whether a ‘lhs’ region belongs to a ‘rhs’ region. Returns a UInt8 number equal to 1 if it belongs, or 0 if it does not belong. The relationship is reflexive – any region also belongs to itself. ### regionHierarchy(id\[, geobase\]) {#regionhierarchyid-geobase} @@ -146,7 +146,7 @@ Example: `regionHierarchy(toUInt32(213)) = [213,1,3,225,10001,10000]`. ### regionToName(id\[, lang\]) {#regiontonameid-lang} -Accepts a UInt32 number – the region ID from the Yandex geobase. A string with the name of the language can be passed as a second argument. Supported languages are: ru, en, ua, uk, by, kz, tr. If the second argument is omitted, the language ‘ru’ is used. If the language is not supported, an exception is thrown. Returns a string – the name of the region in the corresponding language. If the region with the specified ID doesn’t exist, an empty string is returned. +Accepts a UInt32 number – the region ID from the Yandex geobase. A string with the name of the language can be passed as a second argument. Supported languages are: ru, en, ua, uk, by, kz, tr. If the second argument is omitted, the language ‘ru’ is used. If the language is not supported, an exception is thrown. Returns a string – the name of the region in the corresponding language. If the region with the specified ID does not exist, an empty string is returned. `ua` and `uk` both mean Ukrainian. diff --git a/docs/en/sql-reference/operators/in.md b/docs/en/sql-reference/operators/in.md index 0abeabc7f57..3d8d2673468 100644 --- a/docs/en/sql-reference/operators/in.md +++ b/docs/en/sql-reference/operators/in.md @@ -208,10 +208,10 @@ and the temporary table `_data1` will be sent to every remote server with the qu This is more optimal than using the normal IN. However, keep the following points in mind: -1. When creating a temporary table, data is not made unique. To reduce the volume of data transmitted over the network, specify DISTINCT in the subquery. (You don’t need to do this for a normal IN.) +1. When creating a temporary table, data is not made unique. To reduce the volume of data transmitted over the network, specify DISTINCT in the subquery. (You do not need to do this for a normal IN.) 2. The temporary table will be sent to all the remote servers. Transmission does not account for network topology. For example, if 10 remote servers reside in a datacenter that is very remote in relation to the requestor server, the data will be sent 10 times over the channel to the remote datacenter. Try to avoid large data sets when using GLOBAL IN. 3. When transmitting data to remote servers, restrictions on network bandwidth are not configurable. You might overload the network. -4. Try to distribute data across servers so that you don’t need to use GLOBAL IN on a regular basis. +4. Try to distribute data across servers so that you do not need to use GLOBAL IN on a regular basis. 5. 
If you need to use GLOBAL IN often, plan the location of the ClickHouse cluster so that a single group of replicas resides in no more than one data center with a fast network between them, so that a query can be processed entirely within a single data center. It also makes sense to specify a local table in the `GLOBAL IN` clause, in case this local table is only available on the requestor server and you want to use data from it on remote servers. @@ -236,4 +236,4 @@ where M is between 1 and 3 depending on which replica the local query is executi Therefore adding the max_parallel_replicas setting will only produce correct results if both tables have the same replication scheme and are sampled by UserID or a subkey of it. In particular, if local_table_2 does not have a sampling key, incorrect results will be produced. The same rule applies to JOIN. -One workaround if local_table_2 doesn't meet the requirements, is to use `GLOBAL IN` or `GLOBAL JOIN`. +One workaround if local_table_2 does not meet the requirements, is to use `GLOBAL IN` or `GLOBAL JOIN`. diff --git a/docs/en/sql-reference/operators/index.md b/docs/en/sql-reference/operators/index.md index e073d5f23f0..31fce7f72b3 100644 --- a/docs/en/sql-reference/operators/index.md +++ b/docs/en/sql-reference/operators/index.md @@ -250,7 +250,7 @@ The following operators do not have a priority since they are brackets: ## Associativity {#associativity} All binary operators have left associativity. For example, `1 + 2 + 3` is transformed to `plus(plus(1, 2), 3)`. -Sometimes this doesn’t work the way you expect. For example, `SELECT 4 > 2 > 3` will result in 0. +Sometimes this does not work the way you expect. For example, `SELECT 4 > 2 > 3` will result in 0. For efficiency, the `and` and `or` functions accept any number of arguments. The corresponding chains of `AND` and `OR` operators are transformed into a single call of these functions. diff --git a/docs/en/sql-reference/statements/alter/column.md b/docs/en/sql-reference/statements/alter/column.md index d661bd4cd59..2e7cd1be952 100644 --- a/docs/en/sql-reference/statements/alter/column.md +++ b/docs/en/sql-reference/statements/alter/column.md @@ -39,7 +39,7 @@ Adds a new column to the table with the specified `name`, `type`, [`codec`](../. If the `IF NOT EXISTS` clause is included, the query won’t return an error if the column already exists. If you specify `AFTER name_after` (the name of another column), the column is added after the specified one in the list of table columns. If you want to add a column to the beginning of the table use the `FIRST` clause. Otherwise, the column is added to the end of the table. For a chain of actions, `name_after` can be the name of a column that is added in one of the previous actions. -Adding a column just changes the table structure, without performing any actions with data. The data doesn’t appear on the disk after `ALTER`. If the data is missing for a column when reading from the table, it is filled in with default values (by performing the default expression if there is one, or using zeros or empty strings). The column appears on the disk after merging data parts (see [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md)). +Adding a column just changes the table structure, without performing any actions with data. The data does not appear on the disk after `ALTER`. 
If the data is missing for a column when reading from the table, it is filled in with default values (by performing the default expression if there is one, or using zeros or empty strings). The column appears on the disk after merging data parts (see [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md)). This approach allows us to complete the `ALTER` query instantly, without increasing the volume of old data. @@ -70,7 +70,7 @@ Added3 UInt32 DROP COLUMN [IF EXISTS] name ``` -Deletes the column with the name `name`. If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist. +Deletes the column with the name `name`. If the `IF EXISTS` clause is specified, the query won’t return an error if the column does not exist. Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly. @@ -89,7 +89,7 @@ ALTER TABLE visits DROP COLUMN browser RENAME COLUMN [IF EXISTS] name to new_name ``` -Renames the column `name` to `new_name`. If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist. Since renaming does not involve the underlying data, the query is completed almost instantly. +Renames the column `name` to `new_name`. If the `IF EXISTS` clause is specified, the query won’t return an error if the column does not exist. Since renaming does not involve the underlying data, the query is completed almost instantly. **NOTE**: Columns specified in the key expression of the table (either with `ORDER BY` or `PRIMARY KEY`) cannot be renamed. Trying to change these columns will produce `SQL Error [524]`. @@ -107,7 +107,7 @@ CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name Resets all data in a column for a specified partition. Read more about setting the partition name in the section [How to specify the partition expression](#alter-how-to-specify-part-expr). -If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist. +If the `IF EXISTS` clause is specified, the query won’t return an error if the column does not exist. Example: @@ -121,7 +121,7 @@ ALTER TABLE visits CLEAR COLUMN browser IN PARTITION tuple() COMMENT COLUMN [IF EXISTS] name 'comment' ``` -Adds a comment to the column. If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist. +Adds a comment to the column. If the `IF EXISTS` clause is specified, the query won’t return an error if the column does not exist. Each column can have one comment. If a comment already exists for the column, a new comment overwrites the previous comment. @@ -149,11 +149,11 @@ This query changes the `name` column properties: For examples of columns TTL modifying, see [Column TTL](../../../engines/table-engines/mergetree-family/mergetree.md#mergetree-column-ttl). -If the `IF EXISTS` clause is specified, the query won’t return an error if the column doesn’t exist. +If the `IF EXISTS` clause is specified, the query won’t return an error if the column does not exist. The query also can change the order of the columns using `FIRST | AFTER` clause, see [ADD COLUMN](#alter_add-column) description. -When changing the type, values are converted as if the [toType](../../../sql-reference/functions/type-conversion-functions.md) functions were applied to them. If only the default expression is changed, the query doesn’t do anything complex, and is completed almost instantly. 
+When changing the type, values are converted as if the [toType](../../../sql-reference/functions/type-conversion-functions.md) functions were applied to them. If only the default expression is changed, the query does not do anything complex, and is completed almost instantly. Example: @@ -213,4 +213,4 @@ If the `ALTER` query is not sufficient to make the table changes you need, you c The `ALTER` query blocks all reads and writes for the table. In other words, if a long `SELECT` is running at the time of the `ALTER` query, the `ALTER` query will wait for it to complete. At the same time, all new queries to the same table will wait while this `ALTER` is running. -For tables that don’t store data themselves (such as `Merge` and `Distributed`), `ALTER` just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a `Distributed` table, you will also need to run `ALTER` for the tables on all remote servers. +For tables that do not store data themselves (such as `Merge` and `Distributed`), `ALTER` just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a `Distributed` table, you will also need to run `ALTER` for the tables on all remote servers. diff --git a/docs/en/sql-reference/statements/alter/partition.md b/docs/en/sql-reference/statements/alter/partition.md index b22f89928b9..b38643d6027 100644 --- a/docs/en/sql-reference/statements/alter/partition.md +++ b/docs/en/sql-reference/statements/alter/partition.md @@ -184,7 +184,7 @@ To restore data from a backup, do the following: 2. Copy the data from the `data/database/table/` directory inside the backup to the `/var/lib/clickhouse/data/database/table/detached/` directory. 3. Run `ALTER TABLE t ATTACH PARTITION` queries to add the data to a table. -Restoring from a backup doesn’t require stopping the server. +Restoring from a backup does not require stopping the server. For more information about backups and restoring data, see the [Data Backup](../../../operations/backup.md) section. diff --git a/docs/en/sql-reference/statements/check-table.md b/docs/en/sql-reference/statements/check-table.md index 65e6238ebbc..d40fe263b1a 100644 --- a/docs/en/sql-reference/statements/check-table.md +++ b/docs/en/sql-reference/statements/check-table.md @@ -28,7 +28,7 @@ The `CHECK TABLE` query supports the following table engines: Performed over the tables with another table engines causes an exception. -Engines from the `*Log` family don’t provide automatic data recovery on failure. Use the `CHECK TABLE` query to track data loss in a timely manner. +Engines from the `*Log` family do not provide automatic data recovery on failure. Use the `CHECK TABLE` query to track data loss in a timely manner. ## Checking the MergeTree Family Tables {#checking-mergetree-tables} diff --git a/docs/en/sql-reference/statements/create/database.md b/docs/en/sql-reference/statements/create/database.md index bdb31d44b0b..3c6f73d54db 100644 --- a/docs/en/sql-reference/statements/create/database.md +++ b/docs/en/sql-reference/statements/create/database.md @@ -15,7 +15,7 @@ CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster] [ENGINE = engine(.. ### IF NOT EXISTS {#if-not-exists} -If the `db_name` database already exists, then ClickHouse doesn’t create a new database and: +If the `db_name` database already exists, then ClickHouse does not create a new database and: - Doesn’t throw an exception if clause is specified. 
- Throws an exception if clause isn’t specified. diff --git a/docs/en/sql-reference/statements/create/table.md b/docs/en/sql-reference/statements/create/table.md index 5f1f0151350..70ac9acd186 100644 --- a/docs/en/sql-reference/statements/create/table.md +++ b/docs/en/sql-reference/statements/create/table.md @@ -96,13 +96,13 @@ If the default expression is defined, the column type is optional. If there isn If the data type and default expression are defined explicitly, this expression will be cast to the specified type using type casting functions. Example: `Hits UInt32 DEFAULT 0` means the same thing as `Hits UInt32 DEFAULT toUInt32(0)`. -Default expressions may be defined as an arbitrary expression from table constants and columns. When creating and changing the table structure, it checks that expressions don’t contain loops. For INSERT, it checks that expressions are resolvable – that all columns they can be calculated from have been passed. +Default expressions may be defined as an arbitrary expression from table constants and columns. When creating and changing the table structure, it checks that expressions do not contain loops. For INSERT, it checks that expressions are resolvable – that all columns they can be calculated from have been passed. ### DEFAULT {#default} `DEFAULT expr` -Normal default value. If the INSERT query doesn’t specify the corresponding column, it will be filled in by computing the corresponding expression. +Normal default value. If the INSERT query does not specify the corresponding column, it will be filled in by computing the corresponding expression. ### MATERIALIZED {#materialized} @@ -234,14 +234,14 @@ High compression levels are useful for asymmetric scenarios, like compress once, ### Specialized Codecs {#create-query-specialized-codecs} -These codecs are designed to make compression more effective by using specific features of data. Some of these codecs don’t compress data themself. Instead, they prepare the data for a common purpose codec, which compresses it better than without this preparation. +These codecs are designed to make compression more effective by using specific features of data. Some of these codecs do not compress data themself. Instead, they prepare the data for a common purpose codec, which compresses it better than without this preparation. Specialized codecs: - `Delta(delta_bytes)` — Compression approach in which raw values are replaced by the difference of two neighboring values, except for the first value that stays unchanged. Up to `delta_bytes` are used for storing delta values, so `delta_bytes` is the maximum size of raw values. Possible `delta_bytes` values: 1, 2, 4, 8. The default value for `delta_bytes` is `sizeof(type)` if equal to 1, 2, 4, or 8. In all other cases, it’s 1. - `DoubleDelta` — Calculates delta of deltas and writes it in compact binary form. Optimal compression rates are achieved for monotonic sequences with a constant stride, such as time series data. Can be used with any fixed-width type. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. Uses 1 extra bit for 32-byte deltas: 5-bit prefixes instead of 4-bit prefixes. For additional information, see Compressing Time Stamps in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf). - `Gorilla` — Calculates XOR between current and previous value and writes it in compact binary form. 
Efficient when storing a series of floating point values that change slowly, because the best compression rate is achieved when neighboring values are binary equal. Implements the algorithm used in Gorilla TSDB, extending it to support 64-bit types. For additional information, see Compressing Values in [Gorilla: A Fast, Scalable, In-Memory Time Series Database](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf).
-- `T64` — Compression approach that crops unused high bits of values in integer data types (including `Enum`, `Date` and `DateTime`). At each step of its algorithm, codec takes a block of 64 values, puts them into 64x64 bit matrix, transposes it, crops the unused bits of values and returns the rest as a sequence. Unused bits are the bits, that don’t differ between maximum and minimum values in the whole data part for which the compression is used.
+- `T64` — Compression approach that crops unused high bits of values in integer data types (including `Enum`, `Date` and `DateTime`). At each step of its algorithm, the codec takes a block of 64 values, puts them into a 64x64 bit matrix, transposes it, crops the unused bits of values and returns the rest as a sequence. Unused bits are the bits that do not differ between maximum and minimum values in the whole data part for which the compression is used.

`DoubleDelta` and `Gorilla` codecs are used in Gorilla TSDB as the components of its compressing algorithm. Gorilla approach is effective in scenarios when there is a sequence of slowly changing values with their timestamps. Timestamps are effectively compressed by the `DoubleDelta` codec, and values are effectively compressed by the `Gorilla` codec. For example, to get an effectively stored table, you can create it in the following configuration:

@@ -287,7 +287,7 @@ It’s possible to use tables with [ENGINE = Memory](../../../engines/table-engi
!!!note "Note"
    This query is supported only for [Atomic](../../../engines/database-engines/atomic.md) database engine.

-If you need to delete some data from a table, you can create a new table and fill it with a `SELECT` statement that doesn't retrieve unwanted data, then drop the old table and rename the new one:
+If you need to delete some data from a table, you can create a new table and fill it with a `SELECT` statement that does not retrieve unwanted data, then drop the old table and rename the new one:

```sql
CREATE TABLE myNewTable AS myOldTable;

@@ -354,3 +354,39 @@ SELECT * FROM base.t1;
│ 3 │
└───┘
```
+
+## COMMENT Clause {#comment-table}
+
+You can add a comment to the table when creating it.
+
+!!!note "Note"
+    The comment is supported for all table engines except [Kafka](../../../engines/table-engines/integrations/kafka.md), [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) and [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md).
+
+
+**Syntax**
+
+``` sql
+CREATE TABLE db.table_name
+(
+    name1 type1, name2 type2, ...
+) +ENGINE = engine +COMMENT 'Comment' +``` + +**Example** + +Query: + +``` sql +CREATE TABLE t1 (x String) ENGINE = Memory COMMENT 'The temporary table'; +SELECT name, comment FROM system.tables WHERE name = 't1'; +``` + +Result: + +```text +┌─name─┬─comment─────────────┐ +│ t1 │ The temporary table │ +└──────┴─────────────────────┘ +``` diff --git a/docs/en/sql-reference/statements/create/user.md b/docs/en/sql-reference/statements/create/user.md index 456adc4bb13..ad9f203b768 100644 --- a/docs/en/sql-reference/statements/create/user.md +++ b/docs/en/sql-reference/statements/create/user.md @@ -52,7 +52,7 @@ Another way of specifying host is to use `@` syntax following the username. Exam - `CREATE USER mira@'192.168.%.%'` — Equivalent to the `HOST LIKE` syntax. !!! info "Warning" - ClickHouse treats `user_name@'address'` as a username as a whole. Thus, technically you can create multiple users with the same `user_name` and different constructions after `@`. However, we don’t recommend to do so. + ClickHouse treats `user_name@'address'` as a username as a whole. Thus, technically you can create multiple users with the same `user_name` and different constructions after `@`. However, we do not recommend to do so. ## GRANTEES Clause {#grantees} diff --git a/docs/en/sql-reference/statements/create/view.md b/docs/en/sql-reference/statements/create/view.md index 633db355d4a..4b51bb8b067 100644 --- a/docs/en/sql-reference/statements/create/view.md +++ b/docs/en/sql-reference/statements/create/view.md @@ -15,7 +15,7 @@ Syntax: CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] AS SELECT ... ``` -Normal views don’t store any data. They just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the [FROM](../../../sql-reference/statements/select/from.md) clause. +Normal views do not store any data. They just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the [FROM](../../../sql-reference/statements/select/from.md) clause. As an example, assume you’ve created a view: @@ -50,9 +50,9 @@ When creating a materialized view with `TO [db].[table]`, you must not use `POPU A materialized view is implemented as follows: when inserting data to the table specified in `SELECT`, part of the inserted data is converted by this `SELECT` query, and the result is inserted in the view. !!! important "Important" - Materialized views in ClickHouse are implemented more like insert triggers. If there’s some aggregation in the view query, it’s applied only to the batch of freshly inserted data. Any changes to existing data of source table (like update, delete, drop partition, etc.) doesn’t change the materialized view. + Materialized views in ClickHouse are implemented more like insert triggers. If there’s some aggregation in the view query, it’s applied only to the batch of freshly inserted data. Any changes to existing data of source table (like update, delete, drop partition, etc.) does not change the materialized view. -If you specify `POPULATE`, the existing table data is inserted in the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...` . Otherwise, the query contains only the data inserted in the table after creating the view. 
We **don’t recommend** using POPULATE, since data inserted in the table during the view creation will not be inserted in it. +If you specify `POPULATE`, the existing table data is inserted in the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...` . Otherwise, the query contains only the data inserted in the table after creating the view. We **do not recommend** using POPULATE, since data inserted in the table during the view creation will not be inserted in it. A `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Note that the corresponding conversions are performed independently on each block of inserted data. For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single packet of inserted data. The data won’t be further aggregated. The exception is when using an `ENGINE` that independently performs data aggregation, such as `SummingMergeTree`. @@ -229,7 +229,7 @@ WATCH lv ``` ``` -Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.lv doesn't exist.. +Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.lv does not exist.. ``` ### Usage {#live-view-usage} diff --git a/docs/en/sql-reference/statements/detach.md b/docs/en/sql-reference/statements/detach.md index 59f5b297ece..a181dd8deee 100644 --- a/docs/en/sql-reference/statements/detach.md +++ b/docs/en/sql-reference/statements/detach.md @@ -17,7 +17,7 @@ Detaching does not delete the data or metadata for the table or materialized vie Whether the table was detached permanently or not, in both cases you can reattach it using the [ATTACH](../../sql-reference/statements/attach.md). System log tables can be also attached back (e.g. `query_log`, `text_log`, etc). Other system tables can't be reattached. On the next server launch the server will recall those tables again. -`ATTACH MATERIALIZED VIEW` doesn't work with short syntax (without `SELECT`), but you can attach it using the `ATTACH TABLE` query. +`ATTACH MATERIALIZED VIEW` does not work with short syntax (without `SELECT`), but you can attach it using the `ATTACH TABLE` query. Note that you can not detach permanently the table which is already detached (temporary). But you can attach it back and then detach permanently again. @@ -64,7 +64,7 @@ Result: ``` text Received exception from server (version 21.4.1): -Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test doesn't exist. +Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.test does not exist. ``` [Original article](https://clickhouse.tech/docs/en/sql-reference/statements/detach/) diff --git a/docs/en/sql-reference/statements/drop.md b/docs/en/sql-reference/statements/drop.md index 4317a20419e..90a2a46c7cf 100644 --- a/docs/en/sql-reference/statements/drop.md +++ b/docs/en/sql-reference/statements/drop.md @@ -5,7 +5,7 @@ toc_title: DROP # DROP Statements {#drop} -Deletes existing entity. If the `IF EXISTS` clause is specified, these queries don’t return an error if the entity doesn’t exist. +Deletes existing entity. If the `IF EXISTS` clause is specified, these queries do not return an error if the entity does not exist. 
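To make the `IF EXISTS` behaviour described just above concrete, here is a minimal illustrative sketch (not part of the patch above; the database and table names are hypothetical):

``` sql
-- Without IF EXISTS, dropping a missing entity returns an error;
-- with IF EXISTS, both statements simply return Ok.
DROP TABLE IF EXISTS hypothetical_db.hypothetical_table;
DROP DATABASE IF EXISTS hypothetical_db;
```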
## DROP DATABASE {#drop-database} @@ -97,4 +97,4 @@ Syntax: DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster] ``` -[Оriginal article](https://clickhouse.tech/docs/en/sql-reference/statements/drop/) \ No newline at end of file +[Оriginal article](https://clickhouse.tech/docs/en/sql-reference/statements/drop/) diff --git a/docs/en/sql-reference/statements/exists.md b/docs/en/sql-reference/statements/exists.md index 3b0f4b66343..b7c4a487791 100644 --- a/docs/en/sql-reference/statements/exists.md +++ b/docs/en/sql-reference/statements/exists.md @@ -9,4 +9,4 @@ toc_title: EXISTS EXISTS [TEMPORARY] [TABLE|DICTIONARY] [db.]name [INTO OUTFILE filename] [FORMAT format] ``` -Returns a single `UInt8`-type column, which contains the single value `0` if the table or database doesn’t exist, or `1` if the table exists in the specified database. +Returns a single `UInt8`-type column, which contains the single value `0` if the table or database does not exist, or `1` if the table exists in the specified database. diff --git a/docs/en/sql-reference/statements/explain.md b/docs/en/sql-reference/statements/explain.md index 1c19adcdcd8..f22f92c625a 100644 --- a/docs/en/sql-reference/statements/explain.md +++ b/docs/en/sql-reference/statements/explain.md @@ -111,9 +111,11 @@ Dump query plan steps. Settings: -- `header` — Prints output header for step. Default: 0. -- `description` — Prints step description. Default: 1. -- `actions` — Prints detailed information about step actions. Default: 0. +- `header` — Prints output header for step. Default: 0. +- `description` — Prints step description. Default: 1. +- `indexes` — Shows used indexes, the number of filtered parts and the number of filtered granules for every index applied. Default: 0. Supported for [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. +- `actions` — Prints detailed information about step actions. Default: 0. +- `json` — Prints query plan steps as a row in [JSON](../../interfaces/formats.md#json) format. Default: 0. It is recommended to use [TSVRaw](../../interfaces/formats.md#tabseparatedraw) format to avoid unnecessary escaping. Example: @@ -134,6 +136,225 @@ Union !!! note "Note" Step and query cost estimation is not supported. +When `json = 1`, the query plan is represented in JSON format. Every node is a dictionary that always has the keys `Node Type` and `Plans`. `Node Type` is a string with a step name. `Plans` is an array with child step descriptions. Other optional keys may be added depending on node type and settings. + +Example: + +```sql +EXPLAIN json = 1, description = 0 SELECT 1 UNION ALL SELECT 2 FORMAT TSVRaw; +``` + +```json +[ + { + "Plan": { + "Node Type": "Union", + "Plans": [ + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + }, + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + } + ] + } + } +] +``` + +With `description` = 1, the `Description` key is added to the step: + +```json +{ + "Node Type": "ReadFromStorage", + "Description": "SystemOne" +} +``` + +With `header` = 1, the `Header` key is added to the step as an array of columns. 
+ +Example: + +```sql +EXPLAIN json = 1, description = 0, header = 1 SELECT 1, 2 + dummy; +``` + +```json +[ + { + "Plan": { + "Node Type": "Expression", + "Header": [ + { + "Name": "1", + "Type": "UInt8" + }, + { + "Name": "plus(2, dummy)", + "Type": "UInt16" + } + ], + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Header": [ + { + "Name": "dummy", + "Type": "UInt8" + } + ], + "Plans": [ + { + "Node Type": "ReadFromStorage", + "Header": [ + { + "Name": "dummy", + "Type": "UInt8" + } + ] + } + ] + } + ] + } + } +] +``` + +With `indexes` = 1, the `Indexes` key is added. It contains an array of used indexes. Each index is described as JSON with `Type` key (a string `MinMax`, `Partition`, `PrimaryKey` or `Skip`) and optional keys: + +- `Name` — An index name (for now, is used only for `Skip` index). +- `Keys` — An array of columns used by the index. +- `Condition` — A string with condition used. +- `Description` — An index (for now, is used only for `Skip` index). +- `Initial Parts` — A number of parts before the index is applied. +- `Selected Parts` — A number of parts after the index is applied. +- `Initial Granules` — A number of granules before the index is applied. +- `Selected Granulesis` — A number of granules after the index is applied. + +Example: + +```json +"Node Type": "ReadFromMergeTree", +"Indexes": [ + { + "Type": "MinMax", + "Keys": ["y"], + "Condition": "(y in [1, +inf))", + "Initial Parts": 5, + "Selected Parts": 4, + "Initial Granules": 12, + "Selected Granules": 11 + }, + { + "Type": "Partition", + "Keys": ["y", "bitAnd(z, 3)"], + "Condition": "and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1])))", + "Initial Parts": 4, + "Selected Parts": 3, + "Initial Granules": 11, + "Selected Granules": 10 + }, + { + "Type": "PrimaryKey", + "Keys": ["x", "y"], + "Condition": "and((x in [11, +inf)), (y in [1, +inf)))", + "Initial Parts": 3, + "Selected Parts": 2, + "Initial Granules": 10, + "Selected Granules": 6 + }, + { + "Type": "Skip", + "Name": "t_minmax", + "Description": "minmax GRANULARITY 2", + "Initial Parts": 2, + "Selected Parts": 1, + "Initial Granules": 6, + "Selected Granules": 2 + }, + { + "Type": "Skip", + "Name": "t_set", + "Description": "set GRANULARITY 2", + "Initial Parts": 1, + "Selected Parts": 1, + "Initial Granules": 2, + "Selected Granules": 1 + } +] +``` + +With `actions` = 1, added keys depend on step type. + +Example: + +```sql +EXPLAIN json = 1, actions = 1, description = 0 SELECT 1 FORMAT TSVRaw; +``` + +```json +[ + { + "Plan": { + "Node Type": "Expression", + "Expression": { + "Inputs": [], + "Actions": [ + { + "Node Type": "Column", + "Result Type": "UInt8", + "Result Type": "Column", + "Column": "Const(UInt8)", + "Arguments": [], + "Removed Arguments": [], + "Result": 0 + } + ], + "Outputs": [ + { + "Name": "1", + "Type": "UInt8" + } + ], + "Positions": [0], + "Project Input": true + }, + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + } + } +] +``` + ### EXPLAIN PIPELINE {#explain-pipeline} Settings: @@ -141,7 +362,6 @@ Settings: - `header` — Prints header for each output port. Default: 0. - `graph` — Prints a graph described in the [DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) graph description language. Default: 0. - `compact` — Prints graph in compact mode if `graph` setting is enabled. Default: 1. -- `indexes` — Shows used indexes, the number of filtered parts, and granules for every index applied. 
Default: 0. Supported for [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. Example: diff --git a/docs/en/sql-reference/statements/grant.md b/docs/en/sql-reference/statements/grant.md index 89f35b5f701..8ca2b25ce66 100644 --- a/docs/en/sql-reference/statements/grant.md +++ b/docs/en/sql-reference/statements/grant.md @@ -49,7 +49,7 @@ It means that `john` has the permission to execute: - `SELECT x FROM db.table`. - `SELECT y FROM db.table`. -`john` can’t execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` also is not available. Processing this query, ClickHouse doesn’t return any data, even `x` and `y`. The only exception is if a table contains only `x` and `y` columns. In this case ClickHouse returns all the data. +`john` can’t execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` also is not available. Processing this query, ClickHouse does not return any data, even `x` and `y`. The only exception is if a table contains only `x` and `y` columns. In this case ClickHouse returns all the data. Also `john` has the `GRANT OPTION` privilege, so it can grant other users with privileges of the same or smaller scope. @@ -230,7 +230,7 @@ Consider the following privilege: GRANT SELECT(x,y) ON db.table TO john ``` -This privilege allows `john` to execute any `SELECT` query that involves data from the `x` and/or `y` columns in `db.table`, for example, `SELECT x FROM db.table`. `john` can’t execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` also is not available. Processing this query, ClickHouse doesn’t return any data, even `x` and `y`. The only exception is if a table contains only `x` and `y` columns, in this case ClickHouse returns all the data. +This privilege allows `john` to execute any `SELECT` query that involves data from the `x` and/or `y` columns in `db.table`, for example, `SELECT x FROM db.table`. `john` can’t execute `SELECT z FROM db.table`. The `SELECT * FROM db.table` also is not available. Processing this query, ClickHouse does not return any data, even `x` and `y`. The only exception is if a table contains only `x` and `y` columns, in this case ClickHouse returns all the data. ### INSERT {#grant-insert} @@ -240,7 +240,7 @@ Privilege level: `COLUMN`. **Description** -User granted with this privilege can execute `INSERT` queries over a specified list of columns in the specified table and database. If user includes other columns then specified a query doesn’t insert any data. +User granted with this privilege can execute `INSERT` queries over a specified list of columns in the specified table and database. If user includes other columns then specified a query does not insert any data. **Example** @@ -292,7 +292,7 @@ Examples of how this hierarchy is treated: **Notes** -- The `MODIFY SETTING` privilege allows modifying table engine settings. It doesn’t affect settings or server configuration parameters. +- The `MODIFY SETTING` privilege allows modifying table engine settings. It does not affect settings or server configuration parameters. - The `ATTACH` operation needs the [CREATE](#grant-create) privilege. - The `DETACH` operation needs the [DROP](#grant-drop) privilege. - To stop mutation by the [KILL MUTATION](../../sql-reference/statements/misc.md#kill-mutation) query, you need to have a privilege to start this mutation. For example, if you want to stop the `ALTER UPDATE` query, you need the `ALTER UPDATE`, `ALTER TABLE`, or `ALTER` privilege. 
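As an illustration of the last note above (stopping a mutation requires the privilege that starts it), here is a hedged sketch; the user, database and table names are hypothetical and not taken from the patch:

``` sql
-- Granting ALTER UPDATE lets the user both run UPDATE mutations on the table
-- and stop them later with KILL MUTATION.
GRANT ALTER UPDATE ON hypothetical_db.events TO john_hypothetical;

-- The same user can then cancel a running ALTER UPDATE mutation:
KILL MUTATION WHERE database = 'hypothetical_db' AND table = 'events';
```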
@@ -316,7 +316,7 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A Allows executing [DROP](../../sql-reference/statements/misc.md#drop) and [DETACH](../../sql-reference/statements/misc.md#detach) queries according to the following hierarchy of privileges: -- `DROP`. Level: +- `DROP`. Level: `GROUP` - `DROP DATABASE`. Level: `DATABASE` - `DROP TABLE`. Level: `TABLE` - `DROP VIEW`. Level: `VIEW` diff --git a/docs/en/sql-reference/statements/insert-into.md b/docs/en/sql-reference/statements/insert-into.md index 66effcccc3f..db10ddd47c6 100644 --- a/docs/en/sql-reference/statements/insert-into.md +++ b/docs/en/sql-reference/statements/insert-into.md @@ -57,7 +57,7 @@ SELECT * FROM insert_select_testtable; In this example, we see that the second inserted row has `a` and `c` columns filled by the passed values, and `b` filled with value by default. -If a list of columns doesn't include all existing columns, the rest of the columns are filled with: +If a list of columns does not include all existing columns, the rest of the columns are filled with: - The values calculated from the `DEFAULT` expressions specified in the table definition. - Zeros and empty strings, if `DEFAULT` expressions are not defined. @@ -105,6 +105,8 @@ However, you can delete old data using `ALTER TABLE ... DROP PARTITION`. `FORMAT` clause must be specified in the end of query if `SELECT` clause contains table function [input()](../../sql-reference/table-functions/input.md). +To insert a default value instead of `NULL` into a column with a non-nullable data type, enable the [insert_null_as_default](../../operations/settings/settings.md#insert_null_as_default) setting. + ### Performance Considerations {#performance-considerations} `INSERT` sorts the input data by primary key and splits them into partitions by a partition key. If you insert data into several partitions at once, it can significantly reduce the performance of the `INSERT` query. To avoid this: diff --git a/docs/en/sql-reference/statements/kill.md b/docs/en/sql-reference/statements/kill.md index 6aa09cca4ef..eab6f602c4a 100644 --- a/docs/en/sql-reference/statements/kill.md +++ b/docs/en/sql-reference/statements/kill.md @@ -31,7 +31,7 @@ KILL QUERY WHERE user='username' SYNC Read-only users can only stop their own queries. -By default, the asynchronous version of queries is used (`ASYNC`), which doesn’t wait for confirmation that queries have stopped. +By default, the asynchronous version of queries is used (`ASYNC`), which does not wait for confirmation that queries have stopped. The synchronous version (`SYNC`) waits for all queries to stop and displays information about each process as it stops. The response contains the `kill_status` column, which can take the following values: diff --git a/docs/en/sql-reference/statements/optimize.md b/docs/en/sql-reference/statements/optimize.md index 247252d3f4e..69e2caeb322 100644 --- a/docs/en/sql-reference/statements/optimize.md +++ b/docs/en/sql-reference/statements/optimize.md @@ -20,7 +20,7 @@ The `OPTMIZE` query is supported for [MergeTree](../../engines/table-engines/mer When `OPTIMIZE` is used with the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) family of table engines, ClickHouse creates a task for merging and waits for execution on all nodes (if the `replication_alter_partitions_sync` setting is enabled). -- If `OPTIMIZE` doesn’t perform a merge for any reason, it doesn’t notify the client.
To enable notifications, use the [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) setting. +- If `OPTIMIZE` does not perform a merge for any reason, it does not notify the client. To enable notifications, use the [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) setting. - If you specify a `PARTITION`, only the specified partition is optimized. [How to set partition expression](../../sql-reference/statements/alter/index.md#alter-how-to-specify-part-expr). - If you specify `FINAL`, optimization is performed even when all the data is already in one part. Also merge is forced even if concurrent merges are performed. - If you specify `DEDUPLICATE`, then completely identical rows (unless by-clause is specified) will be deduplicated (all columns are compared), it makes sense only for the MergeTree engine. diff --git a/docs/en/sql-reference/statements/rename.md b/docs/en/sql-reference/statements/rename.md index a9dda6ed3b2..4f454626493 100644 --- a/docs/en/sql-reference/statements/rename.md +++ b/docs/en/sql-reference/statements/rename.md @@ -19,4 +19,4 @@ Renames one or more tables. RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ... [ON CLUSTER cluster] ``` -Renaming tables is a light operation. If you indicated another database after `TO`, the table will be moved to this database. However, the directories with databases must reside in the same file system (otherwise, an error is returned). If you rename multiple tables in one query, this is a non-atomic operation, it may be partially executed, queries in other sessions may receive the error `Table ... doesn't exist ..`. +Renaming tables is a light operation. If you indicated another database after `TO`, the table will be moved to this database. However, the directories with databases must reside in the same file system (otherwise, an error is returned). If you rename multiple tables in one query, this is a non-atomic operation, it may be partially executed, queries in other sessions may receive the error `Table ... does not exist ..`. diff --git a/docs/en/sql-reference/statements/select/from.md b/docs/en/sql-reference/statements/select/from.md index 3ecb5096ab8..7c5ea732122 100644 --- a/docs/en/sql-reference/statements/select/from.md +++ b/docs/en/sql-reference/statements/select/from.md @@ -29,7 +29,7 @@ Now `SELECT` queries with `FINAL` are executed in parallel and slightly faster. ### Drawbacks {#drawbacks} -Queries that use `FINAL` are executed slightly slower than similar queries that don’t, because: +Queries that use `FINAL` are executed slightly slower than similar queries that do not, because: - Data is merged during query execution. - Queries with `FINAL` read primary key columns in addition to the columns specified in the query. diff --git a/docs/en/sql-reference/statements/select/group-by.md b/docs/en/sql-reference/statements/select/group-by.md index a07c810fae8..e6affc07b78 100644 --- a/docs/en/sql-reference/statements/select/group-by.md +++ b/docs/en/sql-reference/statements/select/group-by.md @@ -208,7 +208,7 @@ This extra row is only produced in `JSON*`, `TabSeparated*`, and `Pretty*` forma ### Configuring Totals Processing {#configuring-totals-processing} -By default, `totals_mode = 'before_having'`. In this case, ‘totals’ is calculated across all rows, including the ones that don’t pass through HAVING and `max_rows_to_group_by`. +By default, `totals_mode = 'before_having'`. 
In this case, ‘totals’ is calculated across all rows, including the ones that do not pass through HAVING and `max_rows_to_group_by`. The other alternatives include only the rows that pass through HAVING in ‘totals’, and behave differently with the setting `max_rows_to_group_by` and `group_by_overflow_mode = 'any'`. @@ -274,4 +274,4 @@ When merging data flushed to the disk, as well as when merging results from remo When external aggregation is enabled, if there was less than `max_bytes_before_external_group_by` of data (i.e. data was not flushed), the query runs just as fast as without external aggregation. If any temporary data was flushed, the run time will be several times longer (approximately three times). -If you have an [ORDER BY](../../../sql-reference/statements/select/order-by.md) with a [LIMIT](../../../sql-reference/statements/select/limit.md) after `GROUP BY`, then the amount of used RAM depends on the amount of data in `LIMIT`, not in the whole table. But if the `ORDER BY` doesn’t have `LIMIT`, don’t forget to enable external sorting (`max_bytes_before_external_sort`). +If you have an [ORDER BY](../../../sql-reference/statements/select/order-by.md) with a [LIMIT](../../../sql-reference/statements/select/limit.md) after `GROUP BY`, then the amount of used RAM depends on the amount of data in `LIMIT`, not in the whole table. But if the `ORDER BY` does not have `LIMIT`, do not forget to enable external sorting (`max_bytes_before_external_sort`). diff --git a/docs/en/sql-reference/statements/select/index.md b/docs/en/sql-reference/statements/select/index.md index 0712ea8daa7..2f2ce943225 100644 --- a/docs/en/sql-reference/statements/select/index.md +++ b/docs/en/sql-reference/statements/select/index.md @@ -101,7 +101,7 @@ SELECT COLUMNS('a'), COLUMNS('c'), toTypeName(COLUMNS('c')) FROM col_names └────┴────┴────┴────────────────┘ ``` -Each column returned by the `COLUMNS` expression is passed to the function as a separate argument. Also you can pass other arguments to the function if it supports them. Be careful when using functions. If a function doesn’t support the number of arguments you have passed to it, ClickHouse throws an exception. +Each column returned by the `COLUMNS` expression is passed to the function as a separate argument. Also you can pass other arguments to the function if it supports them. Be careful when using functions. If a function does not support the number of arguments you have passed to it, ClickHouse throws an exception. For example: @@ -111,12 +111,12 @@ SELECT COLUMNS('a') + COLUMNS('c') FROM col_names ``` text Received exception from server (version 19.14.1): -Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus doesn't match: passed 3, should be 2. +Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Number of arguments for function plus does not match: passed 3, should be 2. ``` In this example, `COLUMNS('a')` returns two columns: `aa` and `ab`. `COLUMNS('c')` returns the `bc` column. The `+` operator can’t apply to 3 arguments, so ClickHouse throws an exception with the relevant message. -Columns that matched the `COLUMNS` expression can have different data types. If `COLUMNS` doesn’t match any columns and is the only expression in `SELECT`, ClickHouse throws an exception. +Columns that matched the `COLUMNS` expression can have different data types. If `COLUMNS` does not match any columns and is the only expression in `SELECT`, ClickHouse throws an exception. 
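For contrast with the failing three-argument example above, here is a hedged sketch that matches the function arity; it assumes the same `col_names` table used in the surrounding examples (columns `aa`, `ab`, `bc`):

``` sql
-- COLUMNS('a') expands to exactly two columns (aa and ab), which matches the
-- two arguments that plus() expects, so this query succeeds.
SELECT plus(COLUMNS('a')) FROM col_names;
```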
### Asterisk {#asterisk} @@ -128,7 +128,7 @@ You can put an asterisk in any part of a query instead of an expression. When th - When there is strong filtration on a small number of columns using `PREWHERE`. - In subqueries (since columns that aren’t needed for the external query are excluded from subqueries). -In all other cases, we don’t recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of the advantages. In other words using the asterisk is not recommended. +In all other cases, we do not recommend using the asterisk, since it only gives you the drawbacks of a columnar DBMS instead of the advantages. In other words using the asterisk is not recommended. ### Extreme Values {#extreme-values} diff --git a/docs/en/sql-reference/statements/select/order-by.md b/docs/en/sql-reference/statements/select/order-by.md index f19a785c6b7..a8fec5cfa26 100644 --- a/docs/en/sql-reference/statements/select/order-by.md +++ b/docs/en/sql-reference/statements/select/order-by.md @@ -252,11 +252,11 @@ External sorting works much less effectively than sorting in RAM. If `ORDER BY` expression has a prefix that coincides with the table sorting key, you can optimize the query by using the [optimize_read_in_order](../../../operations/settings/settings.md#optimize_read_in_order) setting. - When the `optimize_read_in_order` setting is enabled, the Clickhouse server uses the table index and reads the data in order of the `ORDER BY` key. This allows to avoid reading all data in case of specified [LIMIT](../../../sql-reference/statements/select/limit.md). So queries on big data with small limit are processed faster. + When the `optimize_read_in_order` setting is enabled, the ClickHouse server uses the table index and reads the data in order of the `ORDER BY` key. This allows to avoid reading all data in case of specified [LIMIT](../../../sql-reference/statements/select/limit.md). So queries on big data with small limit are processed faster. -Optimization works with both `ASC` and `DESC` and doesn't work together with [GROUP BY](../../../sql-reference/statements/select/group-by.md) clause and [FINAL](../../../sql-reference/statements/select/from.md#select-from-final) modifier. +Optimization works with both `ASC` and `DESC` and does not work together with [GROUP BY](../../../sql-reference/statements/select/group-by.md) clause and [FINAL](../../../sql-reference/statements/select/from.md#select-from-final) modifier. -When the `optimize_read_in_order` setting is disabled, the Clickhouse server does not use the table index while processing `SELECT` queries. +When the `optimize_read_in_order` setting is disabled, the ClickHouse server does not use the table index while processing `SELECT` queries. Consider disabling `optimize_read_in_order` manually, when running queries that have `ORDER BY` clause, large `LIMIT` and [WHERE](../../../sql-reference/statements/select/where.md) condition that requires to read huge amount of records before queried data is found. @@ -265,7 +265,7 @@ Optimization is supported in the following table engines: - [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) - [Merge](../../../engines/table-engines/special/merge.md), [Buffer](../../../engines/table-engines/special/buffer.md), and [MaterializedView](../../../engines/table-engines/special/materializedview.md) table engines over `MergeTree`-engine tables -In `MaterializedView`-engine tables the optimization works with views like `SELECT ... FROM merge_tree_table ORDER BY pk`. 
But it is not supported in the queries like `SELECT ... FROM view ORDER BY pk` if the view query doesn't have the `ORDER BY` clause. +In `MaterializedView`-engine tables the optimization works with views like `SELECT ... FROM merge_tree_table ORDER BY pk`. But it is not supported in the queries like `SELECT ... FROM view ORDER BY pk` if the view query does not have the `ORDER BY` clause. ## ORDER BY Expr WITH FILL Modifier {#orderby-with-fill} @@ -364,7 +364,7 @@ returns └────────────┴────────────┴──────────┘ ``` -Field `d1` doesn’t fill and use default value cause we don’t have repeated values for `d2` value, and sequence for `d1` can’t be properly calculated. +Field `d1` is not filled and uses the default value, because we do not have repeated values for `d2`, so the sequence for `d1` can’t be properly calculated. The following query with a changed field in `ORDER BY` diff --git a/docs/en/sql-reference/statements/select/prewhere.md b/docs/en/sql-reference/statements/select/prewhere.md index fc43d1de0a1..663b84f2d48 100644 --- a/docs/en/sql-reference/statements/select/prewhere.md +++ b/docs/en/sql-reference/statements/select/prewhere.md @@ -16,6 +16,9 @@ A query may simultaneously specify `PREWHERE` and `WHERE`. In this case, `PREWHE If the `optimize_move_to_prewhere` setting is set to 0, heuristics to automatically move parts of expressions from `WHERE` to `PREWHERE` are disabled. +!!! note "Attention" + The `PREWHERE` section is executed before `FINAL`, so the results of `FROM FINAL` queries may be skewed when using `PREWHERE` with fields not in the `ORDER BY` section of a table. + ## Limitations {#limitations} `PREWHERE` is only supported by tables from the `*MergeTree` family. diff --git a/docs/en/sql-reference/statements/select/sample.md b/docs/en/sql-reference/statements/select/sample.md index 55c1919b81d..2ed0a804736 100644 --- a/docs/en/sql-reference/statements/select/sample.md +++ b/docs/en/sql-reference/statements/select/sample.md @@ -11,7 +11,7 @@ When data sampling is enabled, the query is not performed on all the data, but o Approximated query processing can be useful in the following cases: - When you have strict timing requirements (like \<100ms) but you can’t justify the cost of additional hardware resources to meet them. -- When your raw data is not accurate, so approximation doesn’t noticeably degrade the quality. +- When your raw data is not accurate, so approximation does not noticeably degrade the quality. - Business requirements target approximate results (for cost-effectiveness, or to market exact results to premium users). !!! note "Note" @@ -59,7 +59,7 @@ In this case, the query is executed on a sample of at least `n` rows (but not si Since the minimum unit for data reading is one granule (its size is set by the `index_granularity` setting), it makes sense to set a sample that is much larger than the size of the granule. -When using the `SAMPLE n` clause, you don’t know which relative percent of data was processed. So you don’t know the coefficient the aggregate functions should be multiplied by. Use the `_sample_factor` virtual column to get the approximate result. +When using the `SAMPLE n` clause, you do not know which relative percent of data was processed. So you do not know the coefficient the aggregate functions should be multiplied by. Use the `_sample_factor` virtual column to get the approximate result. The `_sample_factor` column contains relative coefficients that are calculated dynamically.
This column is created automatically when you [create](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) a table with the specified sampling key. The usage examples of the `_sample_factor` column are shown below. @@ -79,7 +79,7 @@ FROM visits SAMPLE 10000000 ``` -The example below shows how to calculate the average session duration. Note that you don’t need to use the relative coefficient to calculate the average values. +The example below shows how to calculate the average session duration. Note that you do not need to use the relative coefficient to calculate the average values. ``` sql SELECT avg(Duration) FROM visits SAMPLE 10000000 ``` diff --git a/docs/en/sql-reference/statements/select/union.md b/docs/en/sql-reference/statements/select/union.md index cf18ff7a4a2..6cedfb89787 100644 --- a/docs/en/sql-reference/statements/select/union.md +++ b/docs/en/sql-reference/statements/select/union.md @@ -78,4 +78,10 @@ Result: Queries that are parts of `UNION/UNION ALL/UNION DISTINCT` can be run simultaneously, and their results can be mixed together. +**See Also** + +- [insert_null_as_default](../../../operations/settings/settings.md#insert_null_as_default) setting. +- [union_default_mode](../../../operations/settings/settings.md#union-default-mode) setting. + + [Original article](https://clickhouse.tech/docs/en/sql-reference/statements/select/union/) diff --git a/docs/en/sql-reference/statements/select/with.md b/docs/en/sql-reference/statements/select/with.md index 6a0564a8ede..0958f651847 100644 --- a/docs/en/sql-reference/statements/select/with.md +++ b/docs/en/sql-reference/statements/select/with.md @@ -4,7 +4,7 @@ toc_title: WITH # WITH Clause {#with-clause} -Clickhouse supports Common Table Expressions ([CTE](https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL)), that is provides to use results of `WITH` clause in the rest of `SELECT` query. Named subqueries can be included to the current and child query context in places where table objects are allowed. Recursion is prevented by hiding the current level CTEs from the WITH expression. +ClickHouse supports Common Table Expressions ([CTE](https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL)); that is, it allows using the results of a `WITH` clause in the rest of a `SELECT` query. Named subqueries can be included in the current and child query context in places where table objects are allowed. Recursion is prevented by hiding the current level CTEs from the WITH expression. diff --git a/docs/en/sql-reference/statements/show.md b/docs/en/sql-reference/statements/show.md index 7b3f709b876..a78ef38241f 100644 --- a/docs/en/sql-reference/statements/show.md +++ b/docs/en/sql-reference/statements/show.md @@ -240,7 +240,7 @@ If user is not specified, the query returns privileges for the current user. Shows parameters that were used at a [user creation](../../sql-reference/statements/create/user.md). -`SHOW CREATE USER` doesn’t output user passwords. +`SHOW CREATE USER` does not output user passwords.
### Syntax {#show-create-user-syntax} diff --git a/docs/en/sql-reference/statements/system.md b/docs/en/sql-reference/statements/system.md index 7871894ccac..9397d7002fd 100644 --- a/docs/en/sql-reference/statements/system.md +++ b/docs/en/sql-reference/statements/system.md @@ -169,7 +169,7 @@ SYSTEM START MERGES [ON VOLUME | [db.]merge_tree_family_table_name ### STOP TTL MERGES {#query_language-stop-ttl-merges} Provides possibility to stop background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family: -Returns `Ok.` even if table doesn’t exist or table has not MergeTree engine. Returns error when database doesn’t exist: +Returns `Ok.` even if table does not exist or table has not MergeTree engine. Returns error when database does not exist: ``` sql SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name] @@ -178,7 +178,7 @@ SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name] ### START TTL MERGES {#query_language-start-ttl-merges} Provides possibility to start background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family: -Returns `Ok.` even if table doesn’t exist. Returns error when database doesn’t exist: +Returns `Ok.` even if table does not exist. Returns error when database does not exist: ``` sql SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name] @@ -187,7 +187,7 @@ SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name] ### STOP MOVES {#query_language-stop-moves} Provides possibility to stop background move data according to [TTL table expression with TO VOLUME or TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family: -Returns `Ok.` even if table doesn’t exist. Returns error when database doesn’t exist: +Returns `Ok.` even if table does not exist. Returns error when database does not exist: ``` sql SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] @@ -196,10 +196,10 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] ### START MOVES {#query_language-start-moves} Provides possibility to start background move data according to [TTL table expression with TO VOLUME and TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family: -Returns `Ok.` even if table doesn’t exist. Returns error when database doesn’t exist: +Returns `Ok.` even if table does not exist. Returns error when database does not exist: ``` sql -SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] +SYSTEM START MOVES [[db.]merge_tree_family_table_name] ``` ## Managing ReplicatedMergeTree Tables {#query-language-system-replicated} @@ -209,7 +209,7 @@ ClickHouse can manage background replication related processes in [ReplicatedMer ### STOP FETCHES {#query_language-system-stop-fetches} Provides possibility to stop background fetches for inserted parts for tables in the `ReplicatedMergeTree` family: -Always returns `Ok.` regardless of the table engine and even if table or database doesn’t exist. +Always returns `Ok.` regardless of the table engine and even if table or database does not exist. 
``` sql SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] @@ -218,7 +218,7 @@ SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] ### START FETCHES {#query_language-system-start-fetches} Provides possibility to start background fetches for inserted parts for tables in the `ReplicatedMergeTree` family: -Always returns `Ok.` regardless of the table engine and even if table or database doesn’t exist. +Always returns `Ok.` regardless of the table engine and even if table or database does not exist. ``` sql SYSTEM START FETCHES [[db.]replicated_merge_tree_family_table_name] diff --git a/docs/en/sql-reference/table-functions/cluster.md b/docs/en/sql-reference/table-functions/cluster.md index b85542d784f..2856e66db9b 100644 --- a/docs/en/sql-reference/table-functions/cluster.md +++ b/docs/en/sql-reference/table-functions/cluster.md @@ -24,7 +24,7 @@ clusterAllReplicas('cluster_name', db, table[, sharding_key]) `sharding_key` - When insert into cluster function with more than one shard, sharding_key need to be provided. -Using the `cluster` and `clusterAllReplicas` table functions are less efficient than creating a `Distributed` table because in this case, the server connection is re-established for every request. When processing a large number of queries, please always create the `Distributed` table ahead of time, and don’t use the `cluster` and `clusterAllReplicas` table functions. +Using the `cluster` and `clusterAllReplicas` table functions are less efficient than creating a `Distributed` table because in this case, the server connection is re-established for every request. When processing a large number of queries, please always create the `Distributed` table ahead of time, and do not use the `cluster` and `clusterAllReplicas` table functions. The `cluster` and `clusterAllReplicas` table functions can be useful in the following cases: diff --git a/docs/en/sql-reference/table-functions/remote.md b/docs/en/sql-reference/table-functions/remote.md index e80e58a76aa..ae399c7e612 100644 --- a/docs/en/sql-reference/table-functions/remote.md +++ b/docs/en/sql-reference/table-functions/remote.md @@ -42,7 +42,7 @@ The dataset from remote servers. **Usage** -Using the `remote` table function is less optimal than creating a `Distributed` table because in this case the server connection is re-established for every request. Also, if hostnames are set, the names are resolved, and errors are not counted when working with various replicas. When processing a large number of queries, always create the `Distributed` table ahead of time, and don’t use the `remote` table function. +Using the `remote` table function is less optimal than creating a `Distributed` table because in this case the server connection is re-established for every request. Also, if hostnames are set, the names are resolved, and errors are not counted when working with various replicas. When processing a large number of queries, always create the `Distributed` table ahead of time, and do not use the `remote` table function. The `remote` table function can be useful in the following cases: diff --git a/docs/en/sql-reference/table-functions/view.md b/docs/en/sql-reference/table-functions/view.md index e49a9f5218b..18323ec4e92 100644 --- a/docs/en/sql-reference/table-functions/view.md +++ b/docs/en/sql-reference/table-functions/view.md @@ -5,7 +5,7 @@ toc_title: view ## view {#view} -Turns a subquery into a table. 
The function implements views (see [CREATE VIEW](https://clickhouse.tech/docs/en/sql-reference/statements/create/view/#create-view)). The resulting table doesn't store data, but only stores the specified `SELECT` query. When reading from the table, ClickHouse executes the query and deletes all unnecessary columns from the result. +Turns a subquery into a table. The function implements views (see [CREATE VIEW](https://clickhouse.tech/docs/en/sql-reference/statements/create/view/#create-view)). The resulting table does not store data, but only stores the specified `SELECT` query. When reading from the table, ClickHouse executes the query and deletes all unnecessary columns from the result. **Syntax** diff --git a/docs/en/whats-new/changelog/2017.md b/docs/en/whats-new/changelog/2017.md index 17d3efe7bab..9ceca2b6c4a 100644 --- a/docs/en/whats-new/changelog/2017.md +++ b/docs/en/whats-new/changelog/2017.md @@ -7,7 +7,7 @@ toc_title: '2017' This release contains bug fixes for the previous release 1.1.54318: -- Fixed bug with possible race condition in replication that could lead to data loss. This issue affects versions 1.1.54310 and 1.1.54318. If you use one of these versions with Replicated tables, the update is strongly recommended. This issue shows in logs in Warning messages like `Part ... from own log doesn't exist.` The issue is relevant even if you don’t see these messages in logs. +- Fixed bug with possible race condition in replication that could lead to data loss. This issue affects versions 1.1.54310 and 1.1.54318. If you use one of these versions with Replicated tables, the update is strongly recommended. This issue shows in logs in Warning messages like `Part ... from own log does not exist.` The issue is relevant even if you do not see these messages in logs. ### ClickHouse Release 1.1.54318, 2017-11-30 {#clickhouse-release-1-1-54318-2017-11-30} @@ -50,7 +50,7 @@ This release contains bug fixes for the previous release 1.1.54310: - Fixed nonatomic adding and removing of parts in Replicated tables. - Data inserted into a materialized view is not subjected to unnecessary deduplication. - Executing a query to a Distributed table for which the local replica is lagging and remote replicas are unavailable does not result in an error anymore. -- Users don’t need access permissions to the `default` database to create temporary tables anymore. +- Users do not need access permissions to the `default` database to create temporary tables anymore. - Fixed crashing when specifying the Array type without arguments. - Fixed hangups when the disk volume containing server logs is full. - Fixed an overflow in the toRelativeWeekNum function for the first week of the Unix epoch. @@ -138,7 +138,7 @@ This release contains bug fixes for the previous release 1.1.54310: #### Please Note When Upgrading: {#please-note-when-upgrading} -- There is now a higher default value for the MergeTree setting `max_bytes_to_merge_at_max_space_in_pool` (the maximum total size of data parts to merge, in bytes): it has increased from 100 GiB to 150 GiB. This might result in large merges running after the server upgrade, which could cause an increased load on the disk subsystem. If the free space available on the server is less than twice the total amount of the merges that are running, this will cause all other merges to stop running, including merges of small data parts. 
As a result, INSERT queries will fail with the message “Merges are processing significantly slower than inserts.” Use the `SELECT * FROM system.merges` query to monitor the situation. You can also check the `DiskSpaceReservedForMerge` metric in the `system.metrics` table, or in Graphite. You don’t need to do anything to fix this, since the issue will resolve itself once the large merges finish. If you find this unacceptable, you can restore the previous value for the `max_bytes_to_merge_at_max_space_in_pool` setting. To do this, go to the `` section in config.xml, set ``` ``107374182400 ``` and restart the server. +- There is now a higher default value for the MergeTree setting `max_bytes_to_merge_at_max_space_in_pool` (the maximum total size of data parts to merge, in bytes): it has increased from 100 GiB to 150 GiB. This might result in large merges running after the server upgrade, which could cause an increased load on the disk subsystem. If the free space available on the server is less than twice the total amount of the merges that are running, this will cause all other merges to stop running, including merges of small data parts. As a result, INSERT queries will fail with the message “Merges are processing significantly slower than inserts.” Use the `SELECT * FROM system.merges` query to monitor the situation. You can also check the `DiskSpaceReservedForMerge` metric in the `system.metrics` table, or in Graphite. You do not need to do anything to fix this, since the issue will resolve itself once the large merges finish. If you find this unacceptable, you can restore the previous value for the `max_bytes_to_merge_at_max_space_in_pool` setting. To do this, go to the `` section in config.xml, set ``` ``107374182400 ``` and restart the server. ### ClickHouse Release 1.1.54284, 2017-08-29 {#clickhouse-release-1-1-54284-2017-08-29} @@ -181,7 +181,7 @@ This release contains bug fixes for the previous release 1.1.54276: - Added the `output_format_json_quote_denormals` setting, which enables outputting nan and inf values in JSON format. - Optimized stream allocation when reading from a Distributed table. -- Settings can be configured in readonly mode if the value doesn’t change. +- Settings can be configured in readonly mode if the value does not change. - Added the ability to retrieve non-integer granules of the MergeTree engine in order to meet restrictions on the block size specified in the preferred_block_size_bytes setting. The purpose is to reduce the consumption of RAM and increase cache locality when processing queries from tables with large columns. - Efficient use of indexes that contain expressions like `toStartOfHour(x)` for conditions like `toStartOfHour(x) op сonstexpr.` - Added new settings for MergeTree engines (the merge_tree section in config.xml): diff --git a/docs/en/whats-new/changelog/2018.md b/docs/en/whats-new/changelog/2018.md index b0c4e147352..3544c9a9b49 100644 --- a/docs/en/whats-new/changelog/2018.md +++ b/docs/en/whats-new/changelog/2018.md @@ -32,7 +32,7 @@ toc_title: '2018' - Now you can use a parameter to configure the precision of the `uniqCombined` aggregate function (select the number of HyperLogLog cells). [#3406](https://github.com/ClickHouse/ClickHouse/pull/3406) - Added the `system.contributors` table that contains the names of everyone who made commits in ClickHouse. [#3452](https://github.com/ClickHouse/ClickHouse/pull/3452) - Added the ability to omit the partition for the `ALTER TABLE ... FREEZE` query in order to back up all partitions at once. 
[#3514](https://github.com/ClickHouse/ClickHouse/pull/3514) -- Added `dictGet` and `dictGetOrDefault` functions that don’t require specifying the type of return value. The type is determined automatically from the dictionary description. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3564) +- Added `dictGet` and `dictGetOrDefault` functions that do not require specifying the type of return value. The type is determined automatically from the dictionary description. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3564) - Now you can specify comments for a column in the table description and change it using `ALTER`. [#3377](https://github.com/ClickHouse/ClickHouse/pull/3377) - Reading is supported for `Join` type tables with simple keys. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3728) - Now you can specify the options `join_use_nulls`, `max_rows_in_join`, `max_bytes_in_join`, and `join_overflow_mode` when creating a `Join` type table. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3728) @@ -70,7 +70,7 @@ toc_title: '2018' #### Improvements: {#improvements-1} -- The server does not write the processed configuration files to the `/etc/clickhouse-server/` directory. Instead, it saves them in the `preprocessed_configs` directory inside `path`. This means that the `/etc/clickhouse-server/` directory doesn’t have write access for the `clickhouse` user, which improves security. [#2443](https://github.com/ClickHouse/ClickHouse/pull/2443) +- The server does not write the processed configuration files to the `/etc/clickhouse-server/` directory. Instead, it saves them in the `preprocessed_configs` directory inside `path`. This means that the `/etc/clickhouse-server/` directory does not have write access for the `clickhouse` user, which improves security. [#2443](https://github.com/ClickHouse/ClickHouse/pull/2443) - The `min_merge_bytes_to_use_direct_io` option is set to 10 GiB by default. A merge that forms large parts of tables from the MergeTree family will be performed in `O_DIRECT` mode, which prevents excessive page cache eviction. [#3504](https://github.com/ClickHouse/ClickHouse/pull/3504) - Accelerated server start when there is a very large number of tables. [#3398](https://github.com/ClickHouse/ClickHouse/pull/3398) - Added a connection pool and HTTP `Keep-Alive` for connections between replicas. [#3594](https://github.com/ClickHouse/ClickHouse/pull/3594) @@ -291,7 +291,7 @@ toc_title: '2018' - Fixed an error when using `FINAL` with `PREWHERE`. [#3298](https://github.com/ClickHouse/ClickHouse/pull/3298) - Fixed an error when using `PREWHERE` over columns that were added during `ALTER`. [#3298](https://github.com/ClickHouse/ClickHouse/pull/3298) - Added a check for the absence of `arrayJoin` for `DEFAULT` and `MATERIALIZED` expressions. Previously, `arrayJoin` led to an error when inserting data. [#3337](https://github.com/ClickHouse/ClickHouse/pull/3337) -- Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... doesn't match` or `Unknown compression method` when executing queries. [#3357](https://github.com/ClickHouse/ClickHouse/pull/3357) +- Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... does not match` or `Unknown compression method` when executing queries. 
[#3357](https://github.com/ClickHouse/ClickHouse/pull/3357) - Fixed segfault that could occur in rare cases after optimization that replaced AND chains from equality evaluations with the corresponding IN expression. [liuyimin-bytedance](https://github.com/ClickHouse/ClickHouse/pull/3339) - Minor corrections to `clickhouse-benchmark`: previously, client information was not sent to the server; now the number of queries executed is calculated more accurately when shutting down and for limiting the number of iterations. [#3351](https://github.com/ClickHouse/ClickHouse/pull/3351) [#3352](https://github.com/ClickHouse/ClickHouse/pull/3352) @@ -392,7 +392,7 @@ toc_title: '2018' - The operation timeout can now be configured when working with ZooKeeper. [urykhy](https://github.com/ClickHouse/ClickHouse/pull/2971) - You can specify an offset for `LIMIT n, m` as `LIMIT n OFFSET m`. [#2840](https://github.com/ClickHouse/ClickHouse/pull/2840) - You can use the `SELECT TOP n` syntax as an alternative for `LIMIT`. [#2840](https://github.com/ClickHouse/ClickHouse/pull/2840) -- Increased the size of the queue to write to system tables, so the `SystemLog parameter queue is full` error doesn’t happen as often. +- Increased the size of the queue to write to system tables, so the `SystemLog parameter queue is full` error does not happen as often. - The `windowFunnel` aggregate function now supports events that meet multiple conditions. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2801) - Duplicate columns can be used in a `USING` clause for `JOIN`. [#3006](https://github.com/ClickHouse/ClickHouse/pull/3006) - `Pretty` formats now have a limit on column alignment by width. Use the `output_format_pretty_max_column_pad_width` setting. If a value is wider, it will still be displayed in its entirety, but the other cells in the table will not be too wide. [#3003](https://github.com/ClickHouse/ClickHouse/pull/3003) @@ -405,7 +405,7 @@ toc_title: '2018' #### Bug Fixes: {#bug-fixes-13} -- Fixed an issue with `Dictionary` tables (throws the `Size of offsets doesn't match size of column` or `Unknown compression method` exception). This bug appeared in version 18.10.3. [#2913](https://github.com/ClickHouse/ClickHouse/issues/2913) +- Fixed an issue with `Dictionary` tables (throws the `Size of offsets does not match size of column` or `Unknown compression method` exception). This bug appeared in version 18.10.3. [#2913](https://github.com/ClickHouse/ClickHouse/issues/2913) - Fixed a bug when merging `CollapsingMergeTree` tables if one of the data parts is empty (these parts are formed during merge or `ALTER DELETE` if all data was deleted), and the `vertical` algorithm was used for the merge. [#3049](https://github.com/ClickHouse/ClickHouse/pull/3049) - Fixed a race condition during `DROP` or `TRUNCATE` for `Memory` tables with a simultaneous `SELECT`, which could lead to server crashes. This bug appeared in version 1.1.54388. [#3038](https://github.com/ClickHouse/ClickHouse/pull/3038) - Fixed the possibility of data loss when inserting in `Replicated` tables if the `Session is expired` error is returned (data loss can be detected by the `ReplicatedDataLoss` metric). This error occurred in version 1.1.54378. 
[#2939](https://github.com/ClickHouse/ClickHouse/pull/2939) [#2949](https://github.com/ClickHouse/ClickHouse/pull/2949) [#2964](https://github.com/ClickHouse/ClickHouse/pull/2964) @@ -715,7 +715,7 @@ toc_title: '2018' #### Backward Incompatible Changes: {#backward-incompatible-changes-7} - Removed escaping in `Vertical` and `Pretty*` formats and deleted the `VerticalRaw` format. -- If servers with version 1.1.54388 (or newer) and servers with an older version are used simultaneously in a distributed query and the query has the `cast(x, 'Type')` expression without the `AS` keyword and doesn’t have the word `cast` in uppercase, an exception will be thrown with a message like `Not found column cast(0, 'UInt8') in block`. Solution: Update the server on the entire cluster. +- If servers with version 1.1.54388 (or newer) and servers with an older version are used simultaneously in a distributed query and the query has the `cast(x, 'Type')` expression without the `AS` keyword and does not have the word `cast` in uppercase, an exception will be thrown with a message like `Not found column cast(0, 'UInt8') in block`. Solution: Update the server on the entire cluster. ### ClickHouse Release 1.1.54385, 2018-06-01 {#clickhouse-release-1-1-54385-2018-06-01} @@ -1044,7 +1044,7 @@ This release contains bug fixes for the previous release 1.1.54337: #### Backward Incompatible Changes: {#backward-incompatible-changes-11} -- The format for marks in `Log` type tables that contain `Nullable` columns was changed in a backward incompatible way. If you have these tables, you should convert them to the `TinyLog` type before starting up the new server version. To do this, replace `ENGINE = Log` with `ENGINE = TinyLog` in the corresponding `.sql` file in the `metadata` directory. If your table doesn’t have `Nullable` columns or if the type of your table is not `Log`, then you don’t need to do anything. +- The format for marks in `Log` type tables that contain `Nullable` columns was changed in a backward incompatible way. If you have these tables, you should convert them to the `TinyLog` type before starting up the new server version. To do this, replace `ENGINE = Log` with `ENGINE = TinyLog` in the corresponding `.sql` file in the `metadata` directory. If your table does not have `Nullable` columns or if the type of your table is not `Log`, then you do not need to do anything. - Removed the `experimental_allow_extended_storage_definition_syntax` setting. Now this feature is enabled by default. - The `runningIncome` function was renamed to `runningDifferenceStartingWithFirstvalue` to avoid confusion. - Removed the `FROM ARRAY JOIN arr` syntax when ARRAY JOIN is specified directly after FROM with no table (Amos Bird). diff --git a/docs/en/whats-new/changelog/2019.md b/docs/en/whats-new/changelog/2019.md index eacd522390f..bd86bf6ce8b 100644 --- a/docs/en/whats-new/changelog/2019.md +++ b/docs/en/whats-new/changelog/2019.md @@ -11,16 +11,16 @@ toc_title: '2019' - Fixed potential buffer overflow in decompress. Malicious user can pass fabricated compressed data that could cause read after buffer. This issue was found by Eldar Zaitov from Yandex information security team. 
[#8404](https://github.com/ClickHouse/ClickHouse/pull/8404) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed possible server crash (`std::terminate`) when the server cannot send or write data in JSON or XML format with values of String data type (that require UTF-8 validation) or when compressing result data with Brotli algorithm or in some other rare cases. [#8384](https://github.com/ClickHouse/ClickHouse/pull/8384) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Fixed dictionaries with source from a clickhouse `VIEW`, now reading such dictionaries doesn’t cause the error `There is no query`. [#8351](https://github.com/ClickHouse/ClickHouse/pull/8351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) +- Fixed dictionaries with source from a clickhouse `VIEW`, now reading such dictionaries does not cause the error `There is no query`. [#8351](https://github.com/ClickHouse/ClickHouse/pull/8351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) - Fixed checking if a client host is allowed by host_regexp specified in users.xml. [#8241](https://github.com/ClickHouse/ClickHouse/pull/8241), [#8342](https://github.com/ClickHouse/ClickHouse/pull/8342) ([Vitaly Baranov](https://github.com/vitlibar)) - `RENAME TABLE` for a distributed table now renames the folder containing inserted data before sending to shards. This fixes an issue with successive renames `tableA->tableB`, `tableC->tableA`. [#8306](https://github.com/ClickHouse/ClickHouse/pull/8306) ([tavplubix](https://github.com/tavplubix)) - `range_hashed` external dictionaries created by DDL queries now allow ranges of arbitrary numeric types. [#8275](https://github.com/ClickHouse/ClickHouse/pull/8275) ([alesapin](https://github.com/alesapin)) - Fixed `INSERT INTO table SELECT ... FROM mysql(...)` table function. [#8234](https://github.com/ClickHouse/ClickHouse/pull/8234) ([tavplubix](https://github.com/tavplubix)) -- Fixed segfault in `INSERT INTO TABLE FUNCTION file()` while inserting into a file which doesn’t exist. Now in this case file would be created and then insert would be processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia)) +- Fixed segfault in `INSERT INTO TABLE FUNCTION file()` while inserting into a file which does not exist. Now in this case file would be created and then insert would be processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia)) - Fixed bitmapAnd error when intersecting an aggregated bitmap and a scalar bitmap. [#8082](https://github.com/ClickHouse/ClickHouse/pull/8082) ([Yue Huang](https://github.com/moon03432)) - Fixed segfault when `EXISTS` query was used without `TABLE` or `DICTIONARY` qualifier, just like `EXISTS t`. [#8213](https://github.com/ClickHouse/ClickHouse/pull/8213) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed return type for functions `rand` and `randConstant` in case of nullable argument. Now functions always return `UInt32` and never `Nullable(UInt32)`. [#8204](https://github.com/ClickHouse/ClickHouse/pull/8204) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) -- Fixed `DROP DICTIONARY IF EXISTS db.dict`, now it doesn’t throw exception if `db` doesn’t exist. [#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar)) +- Fixed `DROP DICTIONARY IF EXISTS db.dict`, now it does not throw exception if `db` does not exist. 
[#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar)) - If a table wasn’t completely dropped because of server crash, the server will try to restore and load it [#8176](https://github.com/ClickHouse/ClickHouse/pull/8176) ([tavplubix](https://github.com/tavplubix)) - Fixed a trivial count query for a distributed table if there are more than two shard local table. [#8164](https://github.com/ClickHouse/ClickHouse/pull/8164) ([小路](https://github.com/nicelulu)) - Fixed bug that lead to a data race in DB::BlockStreamProfileInfo::calculateRowsBeforeLimit() [#8143](https://github.com/ClickHouse/ClickHouse/pull/8143) ([Alexander Kazakov](https://github.com/Akazz)) @@ -36,7 +36,7 @@ toc_title: '2019' - Removed the mutation number from a part name in case there were no mutations. This removing improved the compatibility with older versions. [#8250](https://github.com/ClickHouse/ClickHouse/pull/8250) ([alesapin](https://github.com/alesapin)) - Fixed the bug that mutations are skipped for some attached parts due to their data_version are larger than the table mutation version. [#7812](https://github.com/ClickHouse/ClickHouse/pull/7812) ([Zhichang Yu](https://github.com/yuzhichang)) - Allow starting the server with redundant copies of parts after moving them to another device. [#7810](https://github.com/ClickHouse/ClickHouse/pull/7810) ([Vladimir Chebotarev](https://github.com/excitoon)) -- Fixed the error “Sizes of columns doesn’t match” that might appear when using aggregate function columns. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea)) +- Fixed the error “Sizes of columns does not match” that might appear when using aggregate function columns. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea)) - Now an exception will be thrown in case of using WITH TIES alongside LIMIT BY. And now it’s possible to use TOP with LIMIT BY. [#7637](https://github.com/ClickHouse/ClickHouse/pull/7637) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) - Fix dictionary reload if it has `invalidate_query`, which stopped updates and some exception on previous update tries. [#8029](https://github.com/ClickHouse/ClickHouse/pull/8029) ([alesapin](https://github.com/alesapin)) @@ -52,7 +52,7 @@ toc_title: '2019' - Make `bloom_filter` type of index supporting `LowCardinality` and `Nullable` [#7363](https://github.com/ClickHouse/ClickHouse/issues/7363) [#7561](https://github.com/ClickHouse/ClickHouse/pull/7561) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) - Add function `isValidJSON` to check that passed string is a valid json. [#5910](https://github.com/ClickHouse/ClickHouse/issues/5910) [#7293](https://github.com/ClickHouse/ClickHouse/pull/7293) ([Vdimir](https://github.com/Vdimir)) - Implement `arrayCompact` function [#7328](https://github.com/ClickHouse/ClickHouse/pull/7328) ([Memo](https://github.com/Joeywzr)) -- Created function `hex` for Decimal numbers. It works like `hex(reinterpretAsString())`, but doesn’t delete last zero bytes. [#7355](https://github.com/ClickHouse/ClickHouse/pull/7355) ([Mikhail Korotov](https://github.com/millb)) +- Created function `hex` for Decimal numbers. It works like `hex(reinterpretAsString())`, but does not delete last zero bytes. 
[#7355](https://github.com/ClickHouse/ClickHouse/pull/7355) ([Mikhail Korotov](https://github.com/millb)) - Add `arrayFill` and `arrayReverseFill` functions, which replace elements by other elements in front/back of them in the array. [#7380](https://github.com/ClickHouse/ClickHouse/pull/7380) ([hcz](https://github.com/hczhcz)) - Add `CRC32IEEE()`/`CRC64()` support [#7480](https://github.com/ClickHouse/ClickHouse/pull/7480) ([Azat Khuzhin](https://github.com/azat)) - Implement `char` function similar to one in [mysql](https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_char) [#7486](https://github.com/ClickHouse/ClickHouse/pull/7486) ([sundyli](https://github.com/sundy-li)) @@ -609,12 +609,12 @@ toc_title: '2019' - Fixed the possibility of hanging queries when server is overloaded and global thread pool becomes near full. This have higher chance to happen on clusters with large number of shards (hundreds), because distributed queries allocate a thread per connection to each shard. For example, this issue may reproduce if a cluster of 330 shards is processing 30 concurrent distributed queries. This issue affects all versions starting from 19.2. [#6301](https://github.com/ClickHouse/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/ClickHouse/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fix segfault when decoding symbol table. [#6603](https://github.com/ClickHouse/ClickHouse/pull/6603) ([Amos Bird](https://github.com/amosbird)) -- Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it doesn’t contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) +- Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it does not contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) - Removed extra quoting of description in `system.settings` table. [#6696](https://github.com/ClickHouse/ClickHouse/issues/6696) [#6699](https://github.com/ClickHouse/ClickHouse/pull/6699) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Avoid possible deadlock in `TRUNCATE` of Replicated table. [#6695](https://github.com/ClickHouse/ClickHouse/pull/6695) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fix reading in order of sorting key. [#6189](https://github.com/ClickHouse/ClickHouse/pull/6189) ([Anton Popov](https://github.com/CurtizJ)) - Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/ClickHouse/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin)) -- Fix bug opened by [#4405](https://github.com/ClickHouse/ClickHouse/pull/4405) (since 19.4.0). Reproduces in queries to Distributed tables over MergeTree tables when we doesn’t query any columns (`SELECT 1`). 
[#6236](https://github.com/ClickHouse/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin)) +- Fix bug opened by [#4405](https://github.com/ClickHouse/ClickHouse/pull/4405) (since 19.4.0). Reproduces in queries to Distributed tables over MergeTree tables when we do not query any columns (`SELECT 1`). [#6236](https://github.com/ClickHouse/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin)) - Fixed overflow in integer division of signed type to unsigned type. The behaviour was exactly as in C or C++ language (integer promotion rules) that may be surprising. Please note that the overflow is still possible when dividing large signed number to large unsigned number or vice-versa (but that case is less usual). The issue existed in all server versions. [#6214](https://github.com/ClickHouse/ClickHouse/issues/6214) [#6233](https://github.com/ClickHouse/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Limit maximum sleep time for throttling when `max_execution_speed` or `max_execution_speed_bytes` is set. Fixed false errors like `Estimated query execution time (inf seconds) is too long`. [#5547](https://github.com/ClickHouse/ClickHouse/issues/5547) [#6232](https://github.com/ClickHouse/ClickHouse/pull/6232) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed issues about using `MATERIALIZED` columns and aliases in `MaterializedView`. [#448](https://github.com/ClickHouse/ClickHouse/issues/448) [#3484](https://github.com/ClickHouse/ClickHouse/issues/3484) [#3450](https://github.com/ClickHouse/ClickHouse/issues/3450) [#2878](https://github.com/ClickHouse/ClickHouse/issues/2878) [#2285](https://github.com/ClickHouse/ClickHouse/issues/2285) [#3796](https://github.com/ClickHouse/ClickHouse/pull/3796) ([Amos Bird](https://github.com/amosbird)) [#6316](https://github.com/ClickHouse/ClickHouse/pull/6316) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -639,7 +639,7 @@ toc_title: '2019' - Allow to `ATTACH` live views (for example, at the server startup) regardless to `allow_experimental_live_view` setting. [#6754](https://github.com/ClickHouse/ClickHouse/pull/6754) ([alexey-milovidov](https://github.com/alexey-milovidov)) - For stack traces gathered by query profiler, do not include stack frames generated by the query profiler itself. [#6250](https://github.com/ClickHouse/ClickHouse/pull/6250) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Now table functions `values`, `file`, `url`, `hdfs` have support for ALIAS columns. [#6255](https://github.com/ClickHouse/ClickHouse/pull/6255) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Throw an exception if `config.d` file doesn’t have the corresponding root element as the config file. [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000)) +- Throw an exception if `config.d` file does not have the corresponding root element as the config file. [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000)) - Print extra info in exception message for `no space left on device`.
[#6182](https://github.com/ClickHouse/ClickHouse/issues/6182), [#6252](https://github.com/ClickHouse/ClickHouse/issues/6252) [#6352](https://github.com/ClickHouse/ClickHouse/pull/6352) ([tavplubix](https://github.com/tavplubix)) - When determining shards of a `Distributed` table to be covered by a read query (for `optimize_skip_unused_shards` = 1) ClickHouse now checks conditions from both `prewhere` and `where` clauses of select statement. [#6521](https://github.com/ClickHouse/ClickHouse/pull/6521) ([Alexander Kazakov](https://github.com/Akazz)) - Enabled `SIMDJSON` for machines without AVX2 but with SSE 4.2 and PCLMUL instruction set. [#6285](https://github.com/ClickHouse/ClickHouse/issues/6285) [#6320](https://github.com/ClickHouse/ClickHouse/pull/6320) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -657,7 +657,7 @@ toc_title: '2019' - Fixed possible deadlock of distributed queries when one of shards is localhost but the query is sent via network connection. [#6759](https://github.com/ClickHouse/ClickHouse/pull/6759) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Changed semantic of multiple tables `RENAME` to avoid possible deadlocks. [#6757](https://github.com/ClickHouse/ClickHouse/issues/6757). [#6756](https://github.com/ClickHouse/ClickHouse/pull/6756) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Rewritten MySQL compatibility server to prevent loading full packet payload in memory. Decreased memory consumption for each connection to approximately `2 * DBMS_DEFAULT_BUFFER_SIZE` (read/write buffers). [#5811](https://github.com/ClickHouse/ClickHouse/pull/5811) ([Yuriy Baranov](https://github.com/yurriy)) -- Move AST alias interpreting logic out of parser that doesn’t have to know anything about query semantics. [#6108](https://github.com/ClickHouse/ClickHouse/pull/6108) ([Artem Zuikov](https://github.com/4ertus2)) +- Move AST alias interpreting logic out of parser that does not have to know anything about query semantics. [#6108](https://github.com/ClickHouse/ClickHouse/pull/6108) ([Artem Zuikov](https://github.com/4ertus2)) - Slightly more safe parsing of `NamesAndTypesList`. [#6408](https://github.com/ClickHouse/ClickHouse/issues/6408). [#6410](https://github.com/ClickHouse/ClickHouse/pull/6410) ([alexey-milovidov](https://github.com/alexey-milovidov)) - `clickhouse-copier`: Allow use `where_condition` from config with `partition_key` alias in query for checking partition existence (Earlier it was used only in reading data queries). [#6577](https://github.com/ClickHouse/ClickHouse/pull/6577) ([proller](https://github.com/proller)) - Added optional message argument in `throwIf`. ([#5772](https://github.com/ClickHouse/ClickHouse/issues/5772)) [#6329](https://github.com/ClickHouse/ClickHouse/pull/6329) ([Vdimir](https://github.com/Vdimir)) @@ -869,7 +869,7 @@ toc_title: '2019' #### Improvement {#improvement-4} -- Throws an exception if `config.d` file doesn’t have the corresponding root element as the config file [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000)) +- Throws an exception if `config.d` file does not have the corresponding root element as the config file [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000)) #### Performance Improvement {#performance-improvement-3} @@ -986,7 +986,7 @@ toc_title: '2019' - Fix segfault in ExternalLoader::reloadOutdated(). 
[#6082](https://github.com/ClickHouse/ClickHouse/pull/6082) ([Vitaly Baranov](https://github.com/vitlibar)) - Fixed the case when server may close listening sockets but not shutdown and continue serving remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/ClickHouse/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed useless and incorrect condition on update field for initial loading of external dictionaries via ODBC, MySQL, ClickHouse and HTTP. This fixes [#6069](https://github.com/ClickHouse/ClickHouse/issues/6069) [#6083](https://github.com/ClickHouse/ClickHouse/pull/6083) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it doesn’t contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) +- Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it does not contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) - Fix non-deterministic result of “uniq” aggregate function in extreme rare cases. The bug was present in all ClickHouse versions. [#6058](https://github.com/ClickHouse/ClickHouse/pull/6058) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Segfault when we set a little bit too high CIDR on the function `IPv6CIDRToRange`. [#6068](https://github.com/ClickHouse/ClickHouse/pull/6068) ([Guillaume Tassery](https://github.com/YiuRULE)) - Fixed small memory leak when server throw many exceptions from many different contexts. [#6144](https://github.com/ClickHouse/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -1159,7 +1159,7 @@ toc_title: '2019' #### Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-8} - Implemented `TestKeeper` as an implementation of ZooKeeper interface used for testing [#5643](https://github.com/ClickHouse/ClickHouse/pull/5643) ([alexey-milovidov](https://github.com/alexey-milovidov)) ([levushkin aleksej](https://github.com/alexey-milovidov)) -- From now on `.sql` tests can be run isolated by server, in parallel, with random database. It allows to run them faster, add new tests with custom server configurations, and be sure that different tests doesn’t affect each other. [#5554](https://github.com/ClickHouse/ClickHouse/pull/5554) ([Ivan](https://github.com/abyss7)) +- From now on `.sql` tests can be run isolated by server, in parallel, with random database. It allows to run them faster, add new tests with custom server configurations, and be sure that different tests do not affect each other.
[#5554](https://github.com/ClickHouse/ClickHouse/pull/5554) ([Ivan](https://github.com/abyss7)) - Remove `` and `` from performance tests [#5672](https://github.com/ClickHouse/ClickHouse/pull/5672) ([Olga Khvostikova](https://github.com/stavrolia)) - Fixed “select_format” performance test for `Pretty` formats [#5642](https://github.com/ClickHouse/ClickHouse/pull/5642) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -1201,7 +1201,7 @@ toc_title: '2019' - Fixed UInt32 overflow bug in linear models. Allow eval ML model for non-const model argument. [#5516](https://github.com/ClickHouse/ClickHouse/pull/5516) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) - `ALTER TABLE ... DROP INDEX IF EXISTS ...` should not raise an exception if provided index does not exist [#5524](https://github.com/ClickHouse/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn)) - Fix segfault with `bitmapHasAny` in scalar subquery [#5528](https://github.com/ClickHouse/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang)) -- Fixed error when replication connection pool doesn’t retry to resolve host, even when DNS cache was dropped. [#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin)) +- Fixed error when replication connection pool does not retry to resolve host, even when DNS cache was dropped. [#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin)) - Fixed `ALTER ... MODIFY TTL` on ReplicatedMergeTree. [#5539](https://github.com/ClickHouse/ClickHouse/pull/5539) ([Anton Popov](https://github.com/CurtizJ)) - Fix INSERT into Distributed table with MATERIALIZED column [#5429](https://github.com/ClickHouse/ClickHouse/pull/5429) ([Azat Khuzhin](https://github.com/azat)) - Fix bad alloc when truncate Join storage [#5437](https://github.com/ClickHouse/ClickHouse/pull/5437) ([TCeason](https://github.com/TCeason)) @@ -1261,8 +1261,8 @@ toc_title: '2019' - Added `max_parts_in_total` setting for MergeTree family of tables (default: 100 000) that prevents unsafe specification of partition key #5166. [#5171](https://github.com/ClickHouse/ClickHouse/pull/5171) ([alexey-milovidov](https://github.com/alexey-milovidov)) - `clickhouse-obfuscator`: derive seed for individual columns by combining initial seed with column name, not column position. This is intended to transform datasets with multiple related tables, so that tables will remain JOINable after transformation. [#5178](https://github.com/ClickHouse/ClickHouse/pull/5178) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Added functions `JSONExtractRaw`, `JSONExtractKeyAndValues`. Renamed functions `jsonExtract` to `JSONExtract`. When something goes wrong these functions return the correspondent values, not `NULL`. Modified function `JSONExtract`, now it gets the return type from its last parameter and doesn’t inject nullables. Implemented fallback to RapidJSON in case AVX2 instructions are not available. Simdjson library updated to a new version. [#5235](https://github.com/ClickHouse/ClickHouse/pull/5235) ([Vitaly Baranov](https://github.com/vitlibar)) -- Now `if` and `multiIf` functions don’t rely on the condition’s `Nullable`, but rely on the branches for sql compatibility. [#5238](https://github.com/ClickHouse/ClickHouse/pull/5238) ([Jian Wu](https://github.com/janplus)) +- Added functions `JSONExtractRaw`, `JSONExtractKeyAndValues`. Renamed functions `jsonExtract` to `JSONExtract`. 
When something goes wrong these functions return the corresponding values, not `NULL`. Modified function `JSONExtract`, now it gets the return type from its last parameter and does not inject nullables. Implemented fallback to RapidJSON in case AVX2 instructions are not available. Simdjson library updated to a new version. [#5235](https://github.com/ClickHouse/ClickHouse/pull/5235) ([Vitaly Baranov](https://github.com/vitlibar)) +- Now `if` and `multiIf` functions do not rely on the condition’s `Nullable`, but rely on the branches for SQL compatibility. [#5238](https://github.com/ClickHouse/ClickHouse/pull/5238) ([Jian Wu](https://github.com/janplus)) - `In` predicate now generates `Null` result from `Null` input like the `Equal` function. [#5152](https://github.com/ClickHouse/ClickHouse/pull/5152) ([Jian Wu](https://github.com/janplus)) - Check the time limit every (flush_interval / poll_timeout) number of rows from Kafka. This allows to break the reading from Kafka consumer more frequently and to check the time limits for the top-level streams [#5249](https://github.com/ClickHouse/ClickHouse/pull/5249) ([Ivan](https://github.com/abyss7)) - Link rdkafka with bundled SASL. It should allow to use SASL SCRAM authentication [#5253](https://github.com/ClickHouse/ClickHouse/pull/5253) ([Ivan](https://github.com/abyss7)) @@ -1347,13 +1347,13 @@ toc_title: '2019' - Fixed bitmap functions produce wrong result. [#5359](https://github.com/ClickHouse/ClickHouse/pull/5359) ([Andy Yang](https://github.com/andyyzh)) - Fix element_count for hashed dictionary (do not include duplicates) [#5440](https://github.com/ClickHouse/ClickHouse/pull/5440) ([Azat Khuzhin](https://github.com/azat)) - Use contents of environment variable TZ as the name for timezone. It helps to correctly detect default timezone in some cases.[#5443](https://github.com/ClickHouse/ClickHouse/pull/5443) ([Ivan](https://github.com/abyss7)) -- Do not try to convert integers in `dictGetT` functions, because it doesn’t work correctly. Throw an exception instead. [#5446](https://github.com/ClickHouse/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2)) +- Do not try to convert integers in `dictGetT` functions, because it does not work correctly. Throw an exception instead. [#5446](https://github.com/ClickHouse/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2)) - Fix settings in ExternalData HTTP request. [#5455](https://github.com/ClickHouse/ClickHouse/pull/5455) ([Danila Kutenin](https://github.com/danlark1)) - Fix bug when parts were removed only from FS without dropping them from Zookeeper. [#5520](https://github.com/ClickHouse/ClickHouse/pull/5520) ([alesapin](https://github.com/alesapin)) - Fix segmentation fault in `bitmapHasAny` function. [#5528](https://github.com/ClickHouse/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang)) -- Fixed error when replication connection pool doesn’t retry to resolve host, even when DNS cache was dropped. [#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin)) -- Fixed `DROP INDEX IF EXISTS` query. Now `ALTER TABLE ... DROP INDEX IF EXISTS ...` query doesn’t raise an exception if provided index does not exist. [#5524](https://github.com/ClickHouse/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn)) +- Fixed error when replication connection pool does not retry to resolve host, even when DNS cache was dropped.
[#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin)) +- Fixed `DROP INDEX IF EXISTS` query. Now `ALTER TABLE ... DROP INDEX IF EXISTS ...` query does not raise an exception if provided index does not exist. [#5524](https://github.com/ClickHouse/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn)) - Fix union all supertype column. There were cases with inconsistent data and column types of resulting columns. [#5503](https://github.com/ClickHouse/ClickHouse/pull/5503) ([Artem Zuikov](https://github.com/4ertus2)) - Skip ZNONODE during DDL query processing. Before if another node removes the znode in task queue, the one that did not process it, but already get list of children, will terminate the DDLWorker thread. [#5489](https://github.com/ClickHouse/ClickHouse/pull/5489) ([Azat Khuzhin](https://github.com/azat)) @@ -1712,7 +1712,7 @@ toc_title: '2019' - Improved heuristics of “move to PREWHERE” optimization. [#4405](https://github.com/ClickHouse/ClickHouse/pull/4405) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Use proper lookup tables that uses HashTable’s API for 8-bit and 16-bit keys. [#4536](https://github.com/ClickHouse/ClickHouse/pull/4536) ([Amos Bird](https://github.com/amosbird)) - Improved performance of string comparison. [#4564](https://github.com/ClickHouse/ClickHouse/pull/4564) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Cleanup distributed DDL queue in a separate thread so that it doesn’t slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/ClickHouse/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn)) +- Cleanup distributed DDL queue in a separate thread so that it does not slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/ClickHouse/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn)) - When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode because the data size to read was sometimes underestimated by the size of one compressed block. [#4526](https://github.com/ClickHouse/ClickHouse/pull/4526) ([alexey-milovidov](https://github.com/alexey-milovidov)) #### Build/Testing/Packaging Improvement {#buildtestingpackaging-improvement-12} @@ -1839,7 +1839,7 @@ toc_title: '2019' - Fixed incorrect result when `Date` and `DateTime` arguments are used in branches of conditional operator (function `if`). Added generic case for function `if`. [#4243](https://github.com/ClickHouse/ClickHouse/pull/4243) ([alexey-milovidov](https://github.com/alexey-milovidov)) - ClickHouse dictionaries now load within `clickhouse` process. [#4166](https://github.com/ClickHouse/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed deadlock when `SELECT` from a table with `File` engine was retried after `No such file or directory` error. [#4161](https://github.com/ClickHouse/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov)) +- Fixed race condition when selecting from `system.tables` may give `table does not exist` error. 
[#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov)) - `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/ClickHouse/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/ClickHouse/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn)) - Fixed error: if there is a database with `Dictionary` engine, all dictionaries forced to load at server startup, and if there is a dictionary with ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/ClickHouse/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -1941,7 +1941,7 @@ This release contains exactly the same set of patches as 19.3.6. - Fixed error: if there is a database with `Dictionary` engine, all dictionaries forced to load at server startup, and if there is a dictionary with ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/ClickHouse/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/ClickHouse/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn)) - `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/ClickHouse/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov)) -- Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov)) +- Fixed race condition when selecting from `system.tables` may give `table does not exist` error. [#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed deadlock when `SELECT` from a table with `File` engine was retried after `No such file or directory` error. [#4161](https://github.com/ClickHouse/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed an issue: local ClickHouse dictionaries are loaded via TCP, but should load within process. [#4166](https://github.com/ClickHouse/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed `No message received` error when interacting with PostgreSQL ODBC Driver through TLS connection. Also fixes segfault when using MySQL ODBC Driver. [#4170](https://github.com/ClickHouse/ClickHouse/pull/4170) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -2030,8 +2030,8 @@ This release contains exactly the same set of patches as 19.3.6. #### Performance Improvements {#performance-improvements-5} -- Add a MergeTree setting `use_minimalistic_part_header_in_zookeeper`. If enabled, Replicated tables will store compact part metadata in a single part znode. This can dramatically reduce ZooKeeper snapshot size (especially if the tables have a lot of columns). Note that after enabling this setting you will not be able to downgrade to a version that doesn’t support it. 
[#3960](https://github.com/ClickHouse/ClickHouse/pull/3960) ([Alex Zatelepin](https://github.com/ztlpn)) -- Add an DFA-based implementation for functions `sequenceMatch` and `sequenceCount` in case pattern doesn’t contain time. [#4004](https://github.com/ClickHouse/ClickHouse/pull/4004) ([Léo Ercolanelli](https://github.com/ercolanelli-leo)) +- Add a MergeTree setting `use_minimalistic_part_header_in_zookeeper`. If enabled, Replicated tables will store compact part metadata in a single part znode. This can dramatically reduce ZooKeeper snapshot size (especially if the tables have a lot of columns). Note that after enabling this setting you will not be able to downgrade to a version that does not support it. [#3960](https://github.com/ClickHouse/ClickHouse/pull/3960) ([Alex Zatelepin](https://github.com/ztlpn)) +- Add a DFA-based implementation for functions `sequenceMatch` and `sequenceCount` in case pattern does not contain time. [#4004](https://github.com/ClickHouse/ClickHouse/pull/4004) ([Léo Ercolanelli](https://github.com/ercolanelli-leo)) - Performance improvement for integer numbers serialization. [#3968](https://github.com/ClickHouse/ClickHouse/pull/3968) ([Amos Bird](https://github.com/amosbird)) - Zero left padding PODArray so that -1 element is always valid and zeroed. It’s used for branchless calculation of offsets. [#3920](https://github.com/ClickHouse/ClickHouse/pull/3920) ([Amos Bird](https://github.com/amosbird)) - Reverted `jemalloc` version which lead to performance degradation. [#4018](https://github.com/ClickHouse/ClickHouse/pull/4018) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -2055,7 +2055,7 @@ This release contains exactly the same set of patches as 19.3.6. - Fixed bugs found by PVS-Studio. [#4013](https://github.com/ClickHouse/ClickHouse/pull/4013) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Fixed glibc compatibility issues. [#4100](https://github.com/ClickHouse/ClickHouse/pull/4100) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Move Docker images to 18.10 and add compatibility file for glibc \>= 2.28 [#3965](https://github.com/ClickHouse/ClickHouse/pull/3965) ([alesapin](https://github.com/alesapin)) -- Add env variable if user don’t want to chown directories in server Docker image. [#3967](https://github.com/ClickHouse/ClickHouse/pull/3967) ([alesapin](https://github.com/alesapin)) +- Add env variable if user does not want to chown directories in server Docker image. [#3967](https://github.com/ClickHouse/ClickHouse/pull/3967) ([alesapin](https://github.com/alesapin)) - Enabled most of the warnings from `-Weverything` in clang. Enabled `-Wpedantic`. [#3986](https://github.com/ClickHouse/ClickHouse/pull/3986) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Added a few more warnings that are available only in clang 8. [#3993](https://github.com/ClickHouse/ClickHouse/pull/3993) ([alexey-milovidov](https://github.com/alexey-milovidov)) - Link to `libLLVM` rather than to individual LLVM libs when using shared linking. [#3989](https://github.com/ClickHouse/ClickHouse/pull/3989) ([Orivej Desh](https://github.com/orivej)) diff --git a/docs/en/whats-new/changelog/2020.md b/docs/en/whats-new/changelog/2020.md index bf4e4fb0fcc..7fb1f5d9377 100644 --- a/docs/en/whats-new/changelog/2020.md +++ b/docs/en/whats-new/changelog/2020.md @@ -14,7 +14,7 @@ toc_title: '2020' * Restrict merges from wide to compact parts. In case of vertical merge it led to broken result part.
[#18381](https://github.com/ClickHouse/ClickHouse/pull/18381) ([Anton Popov](https://github.com/CurtizJ)). * Fix filling table `system.settings_profile_elements`. This PR fixes [#18231](https://github.com/ClickHouse/ClickHouse/issues/18231). [#18379](https://github.com/ClickHouse/ClickHouse/pull/18379) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. Fixes [#17682](https://github.com/ClickHouse/ClickHouse/issues/17682). [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365) ([Anton Popov](https://github.com/CurtizJ)). -* Fix error when query `MODIFY COLUMN ... REMOVE TTL` doesn't actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)). +* Fix error when query `MODIFY COLUMN ... REMOVE TTL` does not actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)). #### Build/Testing/Packaging Improvement @@ -88,7 +88,7 @@ toc_title: '2020' * Fix `optimize_distributed_group_by_sharding_key` setting (that is disabled by default) for query with OFFSET only. [#16996](https://github.com/ClickHouse/ClickHouse/pull/16996) ([Azat Khuzhin](https://github.com/azat)). * Fix for Merge tables over Distributed tables with JOIN. [#16993](https://github.com/ClickHouse/ClickHouse/pull/16993) ([Azat Khuzhin](https://github.com/azat)). * Fixed wrong result in big integers (128, 256 bit) when casting from double. Big integers support is experimental. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike](https://github.com/myrrc)). -* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter doesn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). +* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter does not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). * Blame info was not calculated correctly in `clickhouse-git-import`. [#16959](https://github.com/ClickHouse/ClickHouse/pull/16959) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix order by optimization with monotonous functions. Fixes [#16107](https://github.com/ClickHouse/ClickHouse/issues/16107). [#16956](https://github.com/ClickHouse/ClickHouse/pull/16956) ([Anton Popov](https://github.com/CurtizJ)). * Fix optimization of group by with enabled setting `optimize_aggregators_of_group_by_keys` and joins. Fixes [#12604](https://github.com/ClickHouse/ClickHouse/issues/12604). [#16951](https://github.com/ClickHouse/ClickHouse/pull/16951) ([Anton Popov](https://github.com/CurtizJ)). @@ -212,7 +212,7 @@ toc_title: '2020' * `SELECT count() FROM table` now can be executed if only one any column can be selected from the `table`. This PR fixes [#10639](https://github.com/ClickHouse/ClickHouse/issues/10639). [#18233](https://github.com/ClickHouse/ClickHouse/pull/18233) ([Vitaly Baranov](https://github.com/vitlibar)). * `SELECT JOIN` now requires the `SELECT` privilege on each of the joined tables. This PR fixes [#17654](https://github.com/ClickHouse/ClickHouse/issues/17654). 
[#18232](https://github.com/ClickHouse/ClickHouse/pull/18232) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix possible incomplete query result while reading from `MergeTree*` in case of read backoff (message ` MergeTreeReadPool: Will lower number of threads` in logs). Was introduced in [#16423](https://github.com/ClickHouse/ClickHouse/issues/16423). Fixes [#18137](https://github.com/ClickHouse/ClickHouse/issues/18137). [#18216](https://github.com/ClickHouse/ClickHouse/pull/18216) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix error when query `MODIFY COLUMN ... REMOVE TTL` doesn't actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)). +* Fix error when query `MODIFY COLUMN ... REMOVE TTL` does not actually remove column TTL. [#18130](https://github.com/ClickHouse/ClickHouse/pull/18130) ([alesapin](https://github.com/alesapin)). * Fix indeterministic functions with predicate optimizer. This fixes [#17244](https://github.com/ClickHouse/ClickHouse/issues/17244). [#17273](https://github.com/ClickHouse/ClickHouse/pull/17273) ([Winter Zhang](https://github.com/zhang2014)). * Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)). @@ -253,7 +253,7 @@ toc_title: '2020' * Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)). * Fixed wrong result in big integers (128, 256 bit) when casting from double. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike](https://github.com/myrrc)). * Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)). -* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter doesn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). +* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter does not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). * Blame info was not calculated correctly in `clickhouse-git-import`. [#16959](https://github.com/ClickHouse/ClickHouse/pull/16959) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed order by optimization with monotonous functions. Fixes [#16107](https://github.com/ClickHouse/ClickHouse/issues/16107). [#16956](https://github.com/ClickHouse/ClickHouse/pull/16956) ([Anton Popov](https://github.com/CurtizJ)). * Fixed optimization of group by with enabled setting `optimize_aggregators_of_group_by_keys` and joins. This fixes [#12604](https://github.com/ClickHouse/ClickHouse/issues/12604). [#16951](https://github.com/ClickHouse/ClickHouse/pull/16951) ([Anton Popov](https://github.com/CurtizJ)). @@ -424,7 +424,7 @@ toc_title: '2020' * Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. 
[#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)). * Fixed wrong result in big integers (128, 256 bit) when casting from double. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike](https://github.com/myrrc)). * Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)). -* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter doesn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). +* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter does not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). * Blame info was not calculated correctly in `clickhouse-git-import`. [#16959](https://github.com/ClickHouse/ClickHouse/pull/16959) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed order by optimization with monotonous functions. This fixes [#16107](https://github.com/ClickHouse/ClickHouse/issues/16107). [#16956](https://github.com/ClickHouse/ClickHouse/pull/16956) ([Anton Popov](https://github.com/CurtizJ)). * Fixrf optimization of group by with enabled setting `optimize_aggregators_of_group_by_keys` and joins. This fixes [#12604](https://github.com/ClickHouse/ClickHouse/issues/12604). [#16951](https://github.com/ClickHouse/ClickHouse/pull/16951) ([Anton Popov](https://github.com/CurtizJ)). @@ -514,7 +514,7 @@ toc_title: '2020' * Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)). * Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)). * `MaterializeMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)). -* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) - Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). +* Fixed `DROP TABLE IF EXISTS` failure with `Table ... does not exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) - Fixed `DROP/DETACH DATABASE` failure with `Table ... does not exist` when concurrently executing `DROP/DETACH TABLE`. 
[#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). * Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): difference expressions with same alias when query is reanalyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)). * Fix possible very rare deadlocks in RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)). @@ -535,7 +535,7 @@ toc_title: '2020' * Fixed `Element ... is not a constant expression` error when using `JSON*` function result in `VALUES`, `LIMIT` or right side of `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)). * Query will finish faster in case of exception. Cancel execution on remote replicas if exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)). * Prevent the possibility of error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix `Database doesn't exist.` in queries with IN and Distributed table when there's no database on initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)). +* Fix `Database does not exist.` in queries with IN and Distributed table when there's no database on initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)). * Mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)). * Fix bug when `ILIKE` operator stops being case insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)). * Fix `Missing columns` errors when selecting columns which absent in data, but depend on other columns which also absent in data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)). @@ -552,7 +552,7 @@ toc_title: '2020' * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix 'Unknown identifier' in GROUP BY when query has JOIN over Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)). * Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). 
[#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). -* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). +* Fix bug in table engine `Buffer` which does not allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). * Adjust Decimal field size in MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)). * Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep LowCardinality as type for left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)). * Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)). @@ -711,7 +711,7 @@ toc_title: '2020' * Fix bug when `ON CLUSTER` queries may hang forever for non-leader ReplicatedMergeTreeTables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)). * Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)). -* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter doesn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). +* Fix possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter does not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). * Install script should always create subdirs in config folders. This is only relevant for Docker build with custom config. [#16936](https://github.com/ClickHouse/ClickHouse/pull/16936) ([filimonov](https://github.com/filimonov)). * Fix possible error `Illegal type of argument` for queries with `ORDER BY`. Fixes [#16580](https://github.com/ClickHouse/ClickHouse/issues/16580). [#16928](https://github.com/ClickHouse/ClickHouse/pull/16928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Abort multipart upload if no data was written to WriteBufferFromS3. [#16840](https://github.com/ClickHouse/ClickHouse/pull/16840) ([Pavel Kovalenko](https://github.com/Jokser)). @@ -751,7 +751,7 @@ toc_title: '2020' * Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)). * Fix ambiguity in parsing of settings profiles: `CREATE USER ... 
SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)). -* Fixed `DROP TABLE IF EXISTS` failure with `Table ... doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). +* Fixed `DROP TABLE IF EXISTS` failure with `Table ... does not exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... does not exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). * Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)). @@ -789,7 +789,7 @@ toc_title: '2020' * Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)). * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). -* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). +* Fix bug in table engine `Buffer` which does not allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). 
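+
+A minimal sketch of the scenario behind the `Buffer` fix above; the table names, Buffer parameters and the added column are illustrative, not taken from the linked PR:
+
+```
+CREATE TABLE dst (key UInt64) ENGINE = MergeTree ORDER BY key;
+CREATE TABLE buf AS dst ENGINE = Buffer(default, dst, 1, 10, 100, 10000, 1000000, 10000000, 100000000);
+
+-- Change the structure of both tables.
+ALTER TABLE dst ADD COLUMN value String;
+ALTER TABLE buf ADD COLUMN value String;
+
+-- Before the fix, an insert with the new structure through the Buffer table could be rejected.
+INSERT INTO buf (key, value) VALUES (1, 'a');
+```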
* Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)). * Fixed `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)). * Fix to make predicate push down work when subquery contains finalizeAggregation function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)). @@ -909,7 +909,7 @@ toc_title: '2020' * Fixed bug when `ON CLUSTER` queries may hang forever for non-leader ReplicatedMergeTreeTables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)). * Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)). * Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)). -* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` have `WHERE` expression on altering column and alter doesn't finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). +* Fixed possible server crash after `ALTER TABLE ... MODIFY COLUMN ... NewType` when `SELECT` has a `WHERE` expression on the column being altered and the alter has not finished yet. [#16968](https://github.com/ClickHouse/ClickHouse/pull/16968) ([Amos Bird](https://github.com/amosbird)). * Install script should always create subdirs in config folders. This is only relevant for Docker build with custom config. [#16936](https://github.com/ClickHouse/ClickHouse/pull/16936) ([filimonov](https://github.com/filimonov)). * Fixed possible error `Illegal type of argument` for queries with `ORDER BY`. Fixes [#16580](https://github.com/ClickHouse/ClickHouse/issues/16580). [#16928](https://github.com/ClickHouse/ClickHouse/pull/16928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Abort multipart upload if no data was written to WriteBufferFromS3. [#16840](https://github.com/ClickHouse/ClickHouse/pull/16840) ([Pavel Kovalenko](https://github.com/Jokser)). @@ -949,7 +949,7 @@ toc_title: '2020' * Fix rare segfaults when inserting into or selecting from MaterializedView and concurrently dropping target table (for Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)). * Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)). -* Fixed `DROP TABLE IF EXISTS` failure with `Table ...
doesn't exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). +* Fixed `DROP TABLE IF EXISTS` failure with `Table ... does not exist` error when table is concurrently renamed (for Atomic database engine). Fixed rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`) Fixed `DROP/DETACH DATABASE` failure with `Table ... does not exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)). * Fix incorrect empty result for query from `Distributed` table if query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)). @@ -984,7 +984,7 @@ toc_title: '2020' * Fix rare race condition on server startup when system.logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)). * Fix MSan report in QueryLog. Uninitialized memory can be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix instance crash when using joinGet with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)). -* Fix bug in table engine `Buffer` which doesn't allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). +* Fix bug in table engine `Buffer` which does not allow to insert data of new structure into `Buffer` after `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)). * Adjust decimals field size in mysql column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)). * We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison which corrects the usage of FixedString as primary keys. This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). 
[#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)). * If function `bar` was called with specifically crafted arguments, buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)). @@ -1028,7 +1028,7 @@ toc_title: '2020' #### Backward Incompatible Change -* Now `OPTIMIZE FINAL` query doesn't recalculate TTL for parts that were added before TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them, after that `OPTIMIZE FINAL` will evaluate TTL's properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)). +* Now `OPTIMIZE FINAL` query does not recalculate TTL for parts that were added before TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them, after that `OPTIMIZE FINAL` will evaluate TTL's properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)). * Extend `parallel_distributed_insert_select` setting, adding an option to run `INSERT` into local table. The setting changes type from `Bool` to `UInt64`, so the values `false` and `true` are no longer supported. If you have these values in server configuration, the server will not start. Please replace them with `0` and `1`, respectively. [#14060](https://github.com/ClickHouse/ClickHouse/pull/14060) ([Azat Khuzhin](https://github.com/azat)). * Remove support for the `ODBCDriver` input/output format. This was a deprecated format once used for communication with the ClickHouse ODBC driver, now long superseded by the `ODBCDriver2` format. Resolves [#13629](https://github.com/ClickHouse/ClickHouse/issues/13629). [#13847](https://github.com/ClickHouse/ClickHouse/pull/13847) ([hexiaoting](https://github.com/hexiaoting)). * When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version). @@ -1257,7 +1257,7 @@ toc_title: '2020' * Fixed [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572) fix bloom filter index with const expression. [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)). * Fix SIGSEGV in StorageKafka when broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)). * Add support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* CREATE USER IF NOT EXISTS now doesn't throw exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)). 
+* CREATE USER IF NOT EXISTS now does not throw exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)). * Exception `There is no supertype...` can be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from UInt64 column). This fixes [#7306](https://github.com/ClickHouse/ClickHouse/issues/7306). This fixes [#4165](https://github.com/ClickHouse/ClickHouse/issues/4165). [#12633](https://github.com/ClickHouse/ClickHouse/pull/12633) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix possible `Pipeline stuck` error for queries with external sorting. Fixes [#12617](https://github.com/ClickHouse/ClickHouse/issues/12617). [#12618](https://github.com/ClickHouse/ClickHouse/pull/12618) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572). [#12613](https://github.com/ClickHouse/ClickHouse/pull/12613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -1398,7 +1398,7 @@ toc_title: '2020' * Fixed bloom filter index with const expression. This fixes [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572). [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)). * Fixed `SIGSEGV` in `StorageKafka` when broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)). * Added support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* `CREATE USER IF NOT EXISTS` now doesn't throw exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)). +* `CREATE USER IF NOT EXISTS` now does not throw exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)). * Better exception message in disk access storage. [#12625](https://github.com/ClickHouse/ClickHouse/pull/12625) ([alesapin](https://github.com/alesapin)). * The function `groupArrayMoving*` was not working for distributed queries. It's result was calculated within incorrect data type (without promotion to the largest type). The function `groupArrayMovingAvg` was returning integer number that was inconsistent with the `avg` function. This fixes [#12568](https://github.com/ClickHouse/ClickHouse/issues/12568). [#12622](https://github.com/ClickHouse/ClickHouse/pull/12622) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed lack of aliases with function `any`. [#12593](https://github.com/ClickHouse/ClickHouse/pull/12593) ([Anton Popov](https://github.com/CurtizJ)). @@ -1436,7 +1436,7 @@ toc_title: '2020' * Cap max_memory_usage* limits to the process resident memory. [#12182](https://github.com/ClickHouse/ClickHouse/pull/12182) ([Azat Khuzhin](https://github.com/azat)). * Fix dictGet arguments check during `GROUP BY` injective functions elimination. 
[#12179](https://github.com/ClickHouse/ClickHouse/pull/12179) ([Azat Khuzhin](https://github.com/azat)). * Fixed the behaviour when `SummingMergeTree` engine sums up columns from partition key. Added an exception in case of explicit definition of columns to sum which intersects with partition key columns. This fixes [#7867](https://github.com/ClickHouse/ClickHouse/issues/7867). [#12173](https://github.com/ClickHouse/ClickHouse/pull/12173) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). -* Don't split the dictionary source's table name into schema and table name itself if ODBC connection doesn't support schema. [#12165](https://github.com/ClickHouse/ClickHouse/pull/12165) ([Vitaly Baranov](https://github.com/vitlibar)). +* Don't split the dictionary source's table name into schema and table name itself if ODBC connection does not support schema. [#12165](https://github.com/ClickHouse/ClickHouse/pull/12165) ([Vitaly Baranov](https://github.com/vitlibar)). * Fixed wrong logic in `ALTER DELETE` that leads to deleting of records when condition evaluates to NULL. This fixes [#9088](https://github.com/ClickHouse/ClickHouse/issues/9088). This closes [#12106](https://github.com/ClickHouse/ClickHouse/issues/12106). [#12153](https://github.com/ClickHouse/ClickHouse/pull/12153) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed transform of query to send to external DBMS (e.g. MySQL, ODBC) in presense of aliases. This fixes [#12032](https://github.com/ClickHouse/ClickHouse/issues/12032). [#12151](https://github.com/ClickHouse/ClickHouse/pull/12151) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed bad code in redundant ORDER BY optimization. The bug was introduced in [#10067](https://github.com/ClickHouse/ClickHouse/issues/10067). [#12148](https://github.com/ClickHouse/ClickHouse/pull/12148) ([alexey-milovidov](https://github.com/alexey-milovidov)). @@ -1467,7 +1467,7 @@ toc_title: '2020' * Added `KILL QUERY [connection_id]` for the MySQL client/driver to cancel the long query, issue [#12038](https://github.com/ClickHouse/ClickHouse/issues/12038). [#12152](https://github.com/ClickHouse/ClickHouse/pull/12152) ([BohuTANG](https://github.com/BohuTANG)). * Added support for `%g` (two digit ISO year) and `%G` (four digit ISO year) substitutions in `formatDateTime` function. [#12136](https://github.com/ClickHouse/ClickHouse/pull/12136) ([vivarum](https://github.com/vivarum)). * Added 'type' column in system.disks. [#12115](https://github.com/ClickHouse/ClickHouse/pull/12115) ([ianton-ru](https://github.com/ianton-ru)). -* Improved `REVOKE` command: now it requires grant/admin option for only access which will be revoked. For example, to execute `REVOKE ALL ON *.* FROM user1` now it doesn't require to have full access rights granted with grant option. Added command `REVOKE ALL FROM user1` - it revokes all granted roles from `user1`. [#12083](https://github.com/ClickHouse/ClickHouse/pull/12083) ([Vitaly Baranov](https://github.com/vitlibar)). +* Improved `REVOKE` command: now it requires grant/admin option for only access which will be revoked. For example, to execute `REVOKE ALL ON *.* FROM user1` now it does not require to have full access rights granted with grant option. Added command `REVOKE ALL FROM user1` - it revokes all granted roles from `user1`. [#12083](https://github.com/ClickHouse/ClickHouse/pull/12083) ([Vitaly Baranov](https://github.com/vitlibar)). * Added replica priority for load_balancing (for manual prioritization of the load balancing). 
[#11995](https://github.com/ClickHouse/ClickHouse/pull/11995) ([Azat Khuzhin](https://github.com/azat)). * Switched paths in S3 metadata to relative which allows to handle S3 blobs more easily. [#11892](https://github.com/ClickHouse/ClickHouse/pull/11892) ([Vladimir Chebotarev](https://github.com/excitoon)). @@ -1617,7 +1617,7 @@ toc_title: '2020' * Fix wrong result of comparison of FixedString with constant String. This fixes [#11393](https://github.com/ClickHouse/ClickHouse/issues/11393). This bug appeared in version 20.4. [#11828](https://github.com/ClickHouse/ClickHouse/pull/11828) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix wrong result for `if` with NULLs in condition. [#11807](https://github.com/ClickHouse/ClickHouse/pull/11807) ([Artem Zuikov](https://github.com/4ertus2)). * Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fixed `Scalar doesn't exist` exception when using `WITH ...` in `SELECT ... FROM merge_tree_table ...` [#11621](https://github.com/ClickHouse/ClickHouse/issues/11621). [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)). +* Fixed `Scalar does not exist` exception when using `WITH ...` in `SELECT ... FROM merge_tree_table ...` [#11621](https://github.com/ClickHouse/ClickHouse/issues/11621). [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)). * Fix unexpected behaviour of queries like `SELECT *, xyz.*` which were success while an error expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)). * Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)). * Parse metadata stored in zookeeper before checking for equality. [#11739](https://github.com/ClickHouse/ClickHouse/pull/11739) ([Azat Khuzhin](https://github.com/azat)). @@ -1637,7 +1637,7 @@ toc_title: '2020' * Fix wrong exit code of the clickhouse-client, when `exception.code() % 256 == 0`. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)). * Fix race conditions in CREATE/DROP of different replicas of ReplicatedMergeTree. Continue to work if the table was not removed completely from ZooKeeper or not created successfully. This fixes [#11432](https://github.com/ClickHouse/ClickHouse/issues/11432). [#11592](https://github.com/ClickHouse/ClickHouse/pull/11592) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fix trivial error in log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix error `Size of offsets does not match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). 
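+
+For reference, the `Size of offsets does not match size of column` entry above concerns the combination of `ARRAY JOIN` with `PREWHERE ... IN (subquery)`; a minimal sketch with illustrative table and column names:
+
+```
+CREATE TABLE events (id UInt64, tags Array(String)) ENGINE = MergeTree ORDER BY id;
+CREATE TABLE allowed (id UInt64) ENGINE = MergeTree ORDER BY id;
+
+-- Query shape that previously could fail with the offsets/column size mismatch.
+SELECT id, tag
+FROM events
+ARRAY JOIN tags AS tag
+PREWHERE id IN (SELECT id FROM allowed);
+```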
* Fixed rare segfault in `SHOW CREATE TABLE` Fixes [#11490](https://github.com/ClickHouse/ClickHouse/issues/11490). [#11579](https://github.com/ClickHouse/ClickHouse/pull/11579) ([tavplubix](https://github.com/tavplubix)). * All queries in HTTP session have had the same query_id. It is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)). * Now clickhouse-server docker container will prefer IPv6 checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)). @@ -1758,7 +1758,7 @@ toc_title: '2020' * Multiple names are now allowed in commands: CREATE USER, CREATE ROLE, ALTER USER, SHOW CREATE USER, SHOW GRANTS and so on. [#11670](https://github.com/ClickHouse/ClickHouse/pull/11670) ([Vitaly Baranov](https://github.com/vitlibar)). * Add support for distributed DDL (`UPDATE/DELETE/DROP PARTITION`) on cross replication clusters. [#11508](https://github.com/ClickHouse/ClickHouse/pull/11508) ([frank lee](https://github.com/etah000)). * Clear password from command line in `clickhouse-client` and `clickhouse-benchmark` if the user has specified it with explicit value. This prevents password exposure by `ps` and similar tools. [#11665](https://github.com/ClickHouse/ClickHouse/pull/11665) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Don't use debug info from ELF file if it doesn't correspond to the running binary. It is needed to avoid printing wrong function names and source locations in stack traces. This fixes [#7514](https://github.com/ClickHouse/ClickHouse/issues/7514). [#11657](https://github.com/ClickHouse/ClickHouse/pull/11657) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Don't use debug info from ELF file if it does not correspond to the running binary. It is needed to avoid printing wrong function names and source locations in stack traces. This fixes [#7514](https://github.com/ClickHouse/ClickHouse/issues/7514). [#11657](https://github.com/ClickHouse/ClickHouse/pull/11657) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Return NULL/zero when value is not parsed completely in parseDateTimeBestEffortOrNull/Zero functions. This fixes [#7876](https://github.com/ClickHouse/ClickHouse/issues/7876). [#11653](https://github.com/ClickHouse/ClickHouse/pull/11653) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Skip empty parameters in requested URL. They may appear when you write `http://localhost:8123/?&a=b` or `http://localhost:8123/?a=b&&c=d`. This closes [#10749](https://github.com/ClickHouse/ClickHouse/issues/10749). [#11651](https://github.com/ClickHouse/ClickHouse/pull/11651) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Allow using `groupArrayArray` and `groupUniqArrayArray` as `SimpleAggregateFunction`. [#11650](https://github.com/ClickHouse/ClickHouse/pull/11650) ([Volodymyr Kuznetsov](https://github.com/ksvladimir)). @@ -1933,7 +1933,7 @@ toc_title: '2020' * Fixed logical functions for UInt8 values when they are not equal to 0 or 1. [#12196](https://github.com/ClickHouse/ClickHouse/pull/12196) ([Alexander Kazakov](https://github.com/Akazz)). * Cap max_memory_usage* limits to the process resident memory. [#12182](https://github.com/ClickHouse/ClickHouse/pull/12182) ([Azat Khuzhin](https://github.com/azat)). * Fixed `dictGet` arguments check during GROUP BY injective functions elimination. 
[#12179](https://github.com/ClickHouse/ClickHouse/pull/12179) ([Azat Khuzhin](https://github.com/azat)). -* Don't split the dictionary source's table name into schema and table name itself if ODBC connection doesn't support schema. [#12165](https://github.com/ClickHouse/ClickHouse/pull/12165) ([Vitaly Baranov](https://github.com/vitlibar)). +* Don't split the dictionary source's table name into schema and table name itself if ODBC connection does not support schema. [#12165](https://github.com/ClickHouse/ClickHouse/pull/12165) ([Vitaly Baranov](https://github.com/vitlibar)). * Fixed wrong logic in `ALTER DELETE` that leads to deleting of records when condition evaluates to NULL. This fixes [#9088](https://github.com/ClickHouse/ClickHouse/issues/9088). This closes [#12106](https://github.com/ClickHouse/ClickHouse/issues/12106). [#12153](https://github.com/ClickHouse/ClickHouse/pull/12153) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed transform of query to send to external DBMS (e.g. MySQL, ODBC) in presense of aliases. This fixes [#12032](https://github.com/ClickHouse/ClickHouse/issues/12032). [#12151](https://github.com/ClickHouse/ClickHouse/pull/12151) ([alexey-milovidov](https://github.com/alexey-milovidov)). * Fixed potential overflow in integer division. This fixes [#12119](https://github.com/ClickHouse/ClickHouse/issues/12119). [#12140](https://github.com/ClickHouse/ClickHouse/pull/12140) ([alexey-milovidov](https://github.com/alexey-milovidov)). @@ -1997,7 +1997,7 @@ toc_title: '2020' * Fix error `Block structure mismatch` for queries with sampling reading from `Buffer` table. [#11602](https://github.com/ClickHouse/ClickHouse/pull/11602) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix wrong exit code of the clickhouse-client, when exception.code() % 256 = 0. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)). * Fix trivial error in log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix error `Size of offsets does not match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fixed rare segfault in `SHOW CREATE TABLE` Fixes [#11490](https://github.com/ClickHouse/ClickHouse/issues/11490). [#11579](https://github.com/ClickHouse/ClickHouse/pull/11579) ([tavplubix](https://github.com/tavplubix)). * All queries in HTTP session have had the same query_id. It is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)). * Now clickhouse-server docker container will prefer IPv6 checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)). @@ -2213,7 +2213,7 @@ No changes compared to v20.4.3.16-stable. 
* Fix Distributed-over-Distributed with the only one shard in a nested table [#9997](https://github.com/ClickHouse/ClickHouse/pull/9997) ([Azat Khuzhin](https://github.com/azat)) * Fix possible rows loss for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). ... [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) * Fix bug in dictionary when local clickhouse server is used as source. It may caused memory corruption if types in dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)) -* Fixed replicated tables startup when updating from an old ClickHouse version where `/table/replicas/replica_name/metadata` node doesn't exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)) +* Fixed replicated tables startup when updating from an old ClickHouse version where `/table/replicas/replica_name/metadata` node does not exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)) * Fix error `Cannot clone block with columns because block has 0 columns ... While executing GroupingAggregatedTransform`. It happened when setting `distributed_aggregation_memory_efficient` was enabled, and distributed query read aggregating data with mixed single and two-level aggregation from different shards. [#10063](https://github.com/ClickHouse/ClickHouse/pull/10063) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) * Fix deadlock when database with materialized view failed attach at start [#10054](https://github.com/ClickHouse/ClickHouse/pull/10054) ([Azat Khuzhin](https://github.com/azat)) * Fix a segmentation fault that could occur in GROUP BY over string keys containing trailing zero bytes ([#8636](https://github.com/ClickHouse/ClickHouse/issues/8636), [#8925](https://github.com/ClickHouse/ClickHouse/issues/8925)). ... [#10025](https://github.com/ClickHouse/ClickHouse/pull/10025) ([Alexander Kuzmenkov](https://github.com/akuzm)) @@ -2228,7 +2228,7 @@ No changes compared to v20.4.3.16-stable. * Fix `TRUNCATE` for Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)) * Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)) * Fix `DISTINCT` for Distributed when `optimize_skip_unused_shards` is set. [#9808](https://github.com/ClickHouse/ClickHouse/pull/9808) ([Azat Khuzhin](https://github.com/azat)) -* Fix "scalar doesn't exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). ... [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)) +* Fix "scalar does not exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). ... [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)) * Fix error with qualified names in `distributed_product_mode=\'local\'`. 
Fixes [#4756](https://github.com/ClickHouse/ClickHouse/issues/4756) [#9891](https://github.com/ClickHouse/ClickHouse/pull/9891) ([Artem Zuikov](https://github.com/4ertus2)) * For INSERT queries shards now do clamp the settings from the initiator to their constraints instead of throwing an exception. This fix allows to send INSERT queries to a shard with another constraints. This change improves fix [#9447](https://github.com/ClickHouse/ClickHouse/issues/9447). [#9852](https://github.com/ClickHouse/ClickHouse/pull/9852) ([Vitaly Baranov](https://github.com/vitlibar)) * Add some retries when commiting offsets to Kafka broker, since it can reject commit if during `offsets.commit.timeout.ms` there were no enough replicas available for the `__consumer_offsets` topic [#9884](https://github.com/ClickHouse/ClickHouse/pull/9884) ([filimonov](https://github.com/filimonov)) @@ -2248,11 +2248,11 @@ No changes compared to v20.4.3.16-stable. * Fix possible permanent "Cannot schedule a task" error. [#9154](https://github.com/ClickHouse/ClickHouse/pull/9154) ([Azat Khuzhin](https://github.com/azat)) * Fix bug in backquoting in external dictionaries DDL. Fixes [#9619](https://github.com/ClickHouse/ClickHouse/issues/9619). [#9734](https://github.com/ClickHouse/ClickHouse/pull/9734) ([alesapin](https://github.com/alesapin)) * Fixed data race in `text_log`. It does not correspond to any real bug. [#9726](https://github.com/ClickHouse/ClickHouse/pull/9726) ([alexey-milovidov](https://github.com/alexey-milovidov)) -* Fix bug in a replication that doesn't allow replication to work if the user has executed mutations on the previous version. This fixes [#9645](https://github.com/ClickHouse/ClickHouse/issues/9645). [#9652](https://github.com/ClickHouse/ClickHouse/pull/9652) ([alesapin](https://github.com/alesapin)) +* Fix bug in a replication that does not allow replication to work if the user has executed mutations on the previous version. This fixes [#9645](https://github.com/ClickHouse/ClickHouse/issues/9645). [#9652](https://github.com/ClickHouse/ClickHouse/pull/9652) ([alesapin](https://github.com/alesapin)) * Fixed incorrect internal function names for `sumKahan` and `sumWithOverflow`. It led to exception while using this functions in remote queries. [#9636](https://github.com/ClickHouse/ClickHouse/pull/9636) ([Azat Khuzhin](https://github.com/azat)) * Add setting `use_compact_format_in_distributed_parts_names` which allows to write files for `INSERT` queries into `Distributed` table with more compact format. This fixes [#9647](https://github.com/ClickHouse/ClickHouse/issues/9647). [#9653](https://github.com/ClickHouse/ClickHouse/pull/9653) ([alesapin](https://github.com/alesapin)) * Fix RIGHT and FULL JOIN with LowCardinality in JOIN keys. [#9610](https://github.com/ClickHouse/ClickHouse/pull/9610) ([Artem Zuikov](https://github.com/4ertus2)) -* Fix possible exceptions `Size of filter doesn't match size of column` and `Invalid number of rows in Chunk` in `MergeTreeRangeReader`. They could appear while executing `PREWHERE` in some cases. [#9612](https://github.com/ClickHouse/ClickHouse/pull/9612) ([Anton Popov](https://github.com/CurtizJ)) +* Fix possible exceptions `Size of filter does not match size of column` and `Invalid number of rows in Chunk` in `MergeTreeRangeReader`. They could appear while executing `PREWHERE` in some cases. 
[#9612](https://github.com/ClickHouse/ClickHouse/pull/9612) ([Anton Popov](https://github.com/CurtizJ)) * Allow `ALTER ON CLUSTER` of Distributed tables with internal replication. This fixes [#3268](https://github.com/ClickHouse/ClickHouse/issues/3268) [#9617](https://github.com/ClickHouse/ClickHouse/pull/9617) ([shinoi2](https://github.com/shinoi2)) * Fix issue when timezone was not preserved if you write a simple arithmetic expression like `time + 1` (in contrast to an expression like `time + INTERVAL 1 SECOND`). This fixes [#5743](https://github.com/ClickHouse/ClickHouse/issues/5743) [#9323](https://github.com/ClickHouse/ClickHouse/pull/9323) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -2533,7 +2533,7 @@ No changes compared to v20.4.3.16-stable. * Fix error `Block structure mismatch` for queries with sampling reading from `Buffer` table. [#11602](https://github.com/ClickHouse/ClickHouse/pull/11602) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Fix wrong exit code of the clickhouse-client, when exception.code() % 256 = 0. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)). * Fix trivial error in log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)). -* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix error `Size of offsets does not match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * All queries in HTTP session have had the same query_id. It is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)). * Now clickhouse-server docker container will prefer IPv6 checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)). * Fix shard_num/replica_num for `` (breaks use_compact_format_in_distributed_parts_names). [#11528](https://github.com/ClickHouse/ClickHouse/pull/11528) ([Azat Khuzhin](https://github.com/azat)). @@ -2692,7 +2692,7 @@ No changes compared to v20.4.3.16-stable. * Fix incorrect `index_granularity_bytes` check while creating new replica. Fixes [#10098](https://github.com/ClickHouse/ClickHouse/issues/10098). [#10121](https://github.com/ClickHouse/ClickHouse/pull/10121) ([alesapin](https://github.com/alesapin)). * Fix SIGSEGV on INSERT into Distributed table when its structure differs from the underlying tables. [#10105](https://github.com/ClickHouse/ClickHouse/pull/10105) ([Azat Khuzhin](https://github.com/azat)). * Fix possible rows loss for queries with `JOIN` and `UNION ALL`. Fixes [#9826](https://github.com/ClickHouse/ClickHouse/issues/9826), [#10113](https://github.com/ClickHouse/ClickHouse/issues/10113). [#10099](https://github.com/ClickHouse/ClickHouse/pull/10099) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fixed replicated tables startup when updating from an old ClickHouse version where `/table/replicas/replica_name/metadata` node doesn't exist. 
Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)). +* Fixed replicated tables startup when updating from an old ClickHouse version where `/table/replicas/replica_name/metadata` node does not exist. Fixes [#10037](https://github.com/ClickHouse/ClickHouse/issues/10037). [#10095](https://github.com/ClickHouse/ClickHouse/pull/10095) ([alesapin](https://github.com/alesapin)). * Add some arguments check and support identifier arguments for MySQL Database Engine. [#10077](https://github.com/ClickHouse/ClickHouse/pull/10077) ([Winter Zhang](https://github.com/zhang2014)). * Fix bug in clickhouse dictionary source from localhost clickhouse server. The bug may lead to memory corruption if types in dictionary and source are not compatible. [#10071](https://github.com/ClickHouse/ClickHouse/pull/10071) ([alesapin](https://github.com/alesapin)). * Fix bug in `CHECK TABLE` query when table contain skip indices. [#10068](https://github.com/ClickHouse/ClickHouse/pull/10068) ([alesapin](https://github.com/alesapin)). @@ -2704,7 +2704,7 @@ No changes compared to v20.4.3.16-stable. * Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)). * Fix parsing multiple hosts set in the CREATE USER command, e.g. `CREATE USER user6 HOST NAME REGEXP 'lo.?*host', NAME REGEXP 'lo*host'`. [#9924](https://github.com/ClickHouse/ClickHouse/pull/9924) ([Vitaly Baranov](https://github.com/vitlibar)). * Fix `TRUNCATE` for Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)). -* Fix "scalar doesn't exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)). +* Fix "scalar does not exist" error in ALTERs ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)). * Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)). * Fix error with qualified names in `distributed_product_mode='local'`. Fixes [#4756](https://github.com/ClickHouse/ClickHouse/issues/4756). [#9891](https://github.com/ClickHouse/ClickHouse/pull/9891) ([Artem Zuikov](https://github.com/4ertus2)). * Fix calculating grants for introspection functions from the setting 'allow_introspection_functions'. [#9840](https://github.com/ClickHouse/ClickHouse/pull/9840) ([Vitaly Baranov](https://github.com/vitlibar)). @@ -2745,7 +2745,7 @@ No changes compared to v20.4.3.16-stable. #### Bug Fix * This release also contains all bug fixes from 20.1.7.38 -* Fix bug in a replication that doesn't allow replication to work if the user has executed mutations on the previous version. This fixes [#9645](https://github.com/ClickHouse/ClickHouse/issues/9645). [#9652](https://github.com/ClickHouse/ClickHouse/pull/9652) ([alesapin](https://github.com/alesapin)). It makes version 20.3 backward compatible again. +* Fix bug in a replication that does not allow replication to work if the user has executed mutations on the previous version. 
This fixes [#9645](https://github.com/ClickHouse/ClickHouse/issues/9645). [#9652](https://github.com/ClickHouse/ClickHouse/pull/9652) ([alesapin](https://github.com/alesapin)). It makes version 20.3 backward compatible again. * Add setting `use_compact_format_in_distributed_parts_names` which allows to write files for `INSERT` queries into `Distributed` table with more compact format. This fixes [#9647](https://github.com/ClickHouse/ClickHouse/issues/9647). [#9653](https://github.com/ClickHouse/ClickHouse/pull/9653) ([alesapin](https://github.com/alesapin)). It makes version 20.3 backward compatible again. ### ClickHouse release v20.3.2.1, 2020-03-12 @@ -2760,7 +2760,7 @@ No changes compared to v20.4.3.16-stable. * Remove `findClusterIndex`, `findClusterValue` functions. This fixes [#8641](https://github.com/ClickHouse/ClickHouse/issues/8641). If you were using these functions, send an email to `clickhouse-feedback@yandex-team.com` [#9543](https://github.com/ClickHouse/ClickHouse/pull/9543) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Now it's not allowed to create columns or add columns with `SELECT` subquery as default expression. [#9481](https://github.com/ClickHouse/ClickHouse/pull/9481) ([alesapin](https://github.com/alesapin)) * Require aliases for subqueries in JOIN. [#9274](https://github.com/ClickHouse/ClickHouse/pull/9274) ([Artem Zuikov](https://github.com/4ertus2)) -* Improved `ALTER MODIFY/ADD` queries logic. Now you cannot `ADD` column without type, `MODIFY` default expression doesn't change type of column and `MODIFY` type doesn't loose default expression value. Fixes [#8669](https://github.com/ClickHouse/ClickHouse/issues/8669). [#9227](https://github.com/ClickHouse/ClickHouse/pull/9227) ([alesapin](https://github.com/alesapin)) +* Improved `ALTER MODIFY/ADD` queries logic. Now you cannot `ADD` column without type, `MODIFY` default expression does not change type of column and `MODIFY` type does not lose default expression value. Fixes [#8669](https://github.com/ClickHouse/ClickHouse/issues/8669). [#9227](https://github.com/ClickHouse/ClickHouse/pull/9227) ([alesapin](https://github.com/alesapin)) * Require server to be restarted to apply the changes in logging configuration. This is a temporary workaround to avoid the bug where the server logs to a deleted log file (see [#8696](https://github.com/ClickHouse/ClickHouse/issues/8696)). [#8707](https://github.com/ClickHouse/ClickHouse/pull/8707) ([Alexander Kuzmenkov](https://github.com/akuzm)) * The setting `experimental_use_processors` is enabled by default. This setting enables usage of the new query pipeline. This is internal refactoring and we expect no visible changes. If you will see any issues, set it to back zero. [#8768](https://github.com/ClickHouse/ClickHouse/pull/8768) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -2811,7 +2811,7 @@ No changes compared to v20.4.3.16-stable. * Found keys were counted as missed in metrics of cache dictionaries. [#9411](https://github.com/ClickHouse/ClickHouse/pull/9411) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) * Fix replication protocol incompatibility introduced in [#8598](https://github.com/ClickHouse/ClickHouse/issues/8598). [#9412](https://github.com/ClickHouse/ClickHouse/pull/9412) ([alesapin](https://github.com/alesapin)) * Fixed race condition on `queue_task_handle` at the startup of `ReplicatedMergeTree` tables.
[#9552](https://github.com/ClickHouse/ClickHouse/pull/9552) ([alexey-milovidov](https://github.com/alexey-milovidov)) -* The token `NOT` didn't work in `SHOW TABLES NOT LIKE` query [#8727](https://github.com/ClickHouse/ClickHouse/issues/8727) [#8940](https://github.com/ClickHouse/ClickHouse/pull/8940) ([alexey-milovidov](https://github.com/alexey-milovidov)) +* The token `NOT` did not work in `SHOW TABLES NOT LIKE` query [#8727](https://github.com/ClickHouse/ClickHouse/issues/8727) [#8940](https://github.com/ClickHouse/ClickHouse/pull/8940) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Added range check to function `h3EdgeLengthM`. Without this check, buffer overflow is possible. [#8945](https://github.com/ClickHouse/ClickHouse/pull/8945) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fixed up a bug in batched calculations of ternary logical OPs on multiple arguments (more than 10). [#8718](https://github.com/ClickHouse/ClickHouse/pull/8718) ([Alexander Kazakov](https://github.com/Akazz)) * Fix error of PREWHERE optimization, which could lead to segfaults or `Inconsistent number of columns got from MergeTreeRangeReader` exception. [#9024](https://github.com/ClickHouse/ClickHouse/pull/9024) ([Anton Popov](https://github.com/CurtizJ)) @@ -2872,7 +2872,7 @@ No changes compared to v20.4.3.16-stable. * Fix bug in which a misleading error message was shown when running `SHOW CREATE TABLE a_table_that_does_not_exist`. [#8899](https://github.com/ClickHouse/ClickHouse/pull/8899) ([achulkov2](https://github.com/achulkov2)) * Fixed `Parameters are out of bound` exception in some rare cases when we have a constant in the `SELECT` clause when we have an `ORDER BY` and a `LIMIT` clause. [#8892](https://github.com/ClickHouse/ClickHouse/pull/8892) ([Guillaume Tassery](https://github.com/YiuRULE)) * Fix mutations finalization, when already done mutation can have status `is_done=0`. [#9217](https://github.com/ClickHouse/ClickHouse/pull/9217) ([alesapin](https://github.com/alesapin)) -* Prevent from executing `ALTER ADD INDEX` for MergeTree tables with old syntax, because it doesn't work. [#8822](https://github.com/ClickHouse/ClickHouse/pull/8822) ([Mikhail Korotov](https://github.com/millb)) +* Prevent from executing `ALTER ADD INDEX` for MergeTree tables with old syntax, because it does not work. [#8822](https://github.com/ClickHouse/ClickHouse/pull/8822) ([Mikhail Korotov](https://github.com/millb)) * During server startup do not access table, which `LIVE VIEW` depends on, so server will be able to start. Also remove `LIVE VIEW` dependencies when detaching `LIVE VIEW`. `LIVE VIEW` is an experimental feature. [#8824](https://github.com/ClickHouse/ClickHouse/pull/8824) ([tavplubix](https://github.com/tavplubix)) * Fix possible segfault in `MergeTreeRangeReader`, while executing `PREWHERE`. [#9106](https://github.com/ClickHouse/ClickHouse/pull/9106) ([Anton Popov](https://github.com/CurtizJ)) * Fix possible mismatched checksums with column TTLs. [#9451](https://github.com/ClickHouse/ClickHouse/pull/9451) ([Anton Popov](https://github.com/CurtizJ)) @@ -3020,7 +3020,7 @@ No changes compared to v20.4.3.16-stable. #### Bug Fix -* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). 
+* Fix error `Size of offsets does not match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). ### ClickHouse release v20.1.13.105-stable 2020-06-10 @@ -3116,7 +3116,7 @@ No changes compared to v20.4.3.16-stable. * Fix `'Not found column in block'` error when `JOIN` appears with `TOTALS`. Fixes [#9839](https://github.com/ClickHouse/ClickHouse/issues/9839). [#9939](https://github.com/ClickHouse/ClickHouse/pull/9939) ([Artem Zuikov](https://github.com/4ertus2)). * Fix a bug with `ON CLUSTER` DDL queries freezing on server startup. [#9927](https://github.com/ClickHouse/ClickHouse/pull/9927) ([Gagan Arneja](https://github.com/garneja)). * Fix `TRUNCATE` for Join table engine ([#9917](https://github.com/ClickHouse/ClickHouse/issues/9917)). [#9920](https://github.com/ClickHouse/ClickHouse/pull/9920) ([Amos Bird](https://github.com/amosbird)). -* Fix `'scalar doesn't exist'` error in ALTER queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)). +* Fix `'scalar does not exist'` error in ALTER queries ([#9878](https://github.com/ClickHouse/ClickHouse/issues/9878)). [#9904](https://github.com/ClickHouse/ClickHouse/pull/9904) ([Amos Bird](https://github.com/amosbird)). * Fix race condition between drop and optimize in `ReplicatedMergeTree`. [#9901](https://github.com/ClickHouse/ClickHouse/pull/9901) ([alesapin](https://github.com/alesapin)). * Fixed `DeleteOnDestroy` logic in `ATTACH PART` which could lead to automatic removal of attached part and added few tests. [#9410](https://github.com/ClickHouse/ClickHouse/pull/9410) ([Vladimir Chebotarev](https://github.com/excitoon)). @@ -3155,7 +3155,7 @@ No changes compared to v20.4.3.16-stable. #### Bug Fix * Fixed incorrect internal function names for `sumKahan` and `sumWithOverflow`. I lead to exception while using this functions in remote queries. [#9636](https://github.com/ClickHouse/ClickHouse/pull/9636) ([Azat Khuzhin](https://github.com/azat)). This issue was in all ClickHouse releases. * Allow `ALTER ON CLUSTER` of `Distributed` tables with internal replication. This fixes [#3268](https://github.com/ClickHouse/ClickHouse/issues/3268). [#9617](https://github.com/ClickHouse/ClickHouse/pull/9617) ([shinoi2](https://github.com/shinoi2)). This issue was in all ClickHouse releases. -* Fix possible exceptions `Size of filter doesn't match size of column` and `Invalid number of rows in Chunk` in `MergeTreeRangeReader`. They could appear while executing `PREWHERE` in some cases. Fixes [#9132](https://github.com/ClickHouse/ClickHouse/issues/9132). [#9612](https://github.com/ClickHouse/ClickHouse/pull/9612) ([Anton Popov](https://github.com/CurtizJ)) +* Fix possible exceptions `Size of filter does not match size of column` and `Invalid number of rows in Chunk` in `MergeTreeRangeReader`. They could appear while executing `PREWHERE` in some cases. Fixes [#9132](https://github.com/ClickHouse/ClickHouse/issues/9132). [#9612](https://github.com/ClickHouse/ClickHouse/pull/9612) ([Anton Popov](https://github.com/CurtizJ)) * Fixed the issue: timezone was not preserved if you write a simple arithmetic expression like `time + 1` (in contrast to an expression like `time + INTERVAL 1 SECOND`). This fixes [#5743](https://github.com/ClickHouse/ClickHouse/issues/5743). 
[#9323](https://github.com/ClickHouse/ClickHouse/pull/9323) ([alexey-milovidov](https://github.com/alexey-milovidov)). This issue was in all ClickHouse releases. * Now it's not possible to create or add columns with simple cyclic aliases like `a DEFAULT b, b DEFAULT a`. [#9603](https://github.com/ClickHouse/ClickHouse/pull/9603) ([alesapin](https://github.com/alesapin)) * Fixed the issue when padding at the end of base64 encoded value can be malformed. Update base64 library. This fixes [#9491](https://github.com/ClickHouse/ClickHouse/issues/9491), closes [#9492](https://github.com/ClickHouse/ClickHouse/issues/9492) [#9500](https://github.com/ClickHouse/ClickHouse/pull/9500) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -3199,7 +3199,7 @@ No changes compared to v20.4.3.16-stable. [#9231](https://github.com/ClickHouse/ClickHouse/pull/9231) [(abyss7)](https://github.com/abyss7) * Allow comma join with `IN()` inside. Fixes [#7314](https://github.com/ClickHouse/ClickHouse/issues/7314). [#9251](https://github.com/ClickHouse/ClickHouse/pull/9251) [(4ertus2)](https://github.com/4ertus2) -* Improve `ALTER MODIFY/ADD` queries logic. Now you cannot `ADD` column without type, `MODIFY` default expression doesn't change type of column and `MODIFY` type doesn't loose default expression value. Fixes [#8669](https://github.com/ClickHouse/ClickHouse/issues/8669). +* Improve `ALTER MODIFY/ADD` queries logic. Now you cannot `ADD` column without type, `MODIFY` default expression does not change type of column and `MODIFY` type does not lose default expression value. Fixes [#8669](https://github.com/ClickHouse/ClickHouse/issues/8669). [#9227](https://github.com/ClickHouse/ClickHouse/pull/9227) [(alesapin)](https://github.com/alesapin) * Fix mutations finalization, when already done mutation can have status is_done=0. [#9217](https://github.com/ClickHouse/ClickHouse/pull/9217) [(alesapin)](https://github.com/alesapin) @@ -3283,7 +3283,7 @@ No changes compared to v20.4.3.16-stable. * Fix error "Mismatch column sizes" when inserting default `Tuple` from `JSONEachRow`. This fixes [#5653](https://github.com/ClickHouse/ClickHouse/issues/5653). [#8606](https://github.com/ClickHouse/ClickHouse/pull/8606) ([tavplubix](https://github.com/tavplubix)) * Now an exception will be thrown in case of using `WITH TIES` alongside `LIMIT BY`. Also add ability to use `TOP` with `LIMIT BY`. This fixes [#7472](https://github.com/ClickHouse/ClickHouse/issues/7472). [#7637](https://github.com/ClickHouse/ClickHouse/pull/7637) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) * Fix unintendent dependency from fresh glibc version in `clickhouse-odbc-bridge` binary. [#8046](https://github.com/ClickHouse/ClickHouse/pull/8046) ([Amos Bird](https://github.com/amosbird)) -* Fix bug in check function of `*MergeTree` engines family. Now it doesn't fail in case when we have equal amount of rows in last granule and last mark (non-final). [#8047](https://github.com/ClickHouse/ClickHouse/pull/8047) ([alesapin](https://github.com/alesapin)) +* Fix bug in check function of `*MergeTree` engines family. Now it does not fail in case when we have equal amount of rows in last granule and last mark (non-final). [#8047](https://github.com/ClickHouse/ClickHouse/pull/8047) ([alesapin](https://github.com/alesapin)) * Fix insert into `Enum*` columns after `ALTER` query, when underlying numeric type is equal to table specified type. This fixes [#7836](https://github.com/ClickHouse/ClickHouse/issues/7836).
[#7908](https://github.com/ClickHouse/ClickHouse/pull/7908) ([Anton Popov](https://github.com/CurtizJ)) * Allowed non-constant negative "size" argument for function `substring`. It was not allowed by mistake. This fixes [#4832](https://github.com/ClickHouse/ClickHouse/issues/4832). [#7703](https://github.com/ClickHouse/ClickHouse/pull/7703) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fix parsing bug when wrong number of arguments passed to `(O|J)DBC` table engine. [#7709](https://github.com/ClickHouse/ClickHouse/pull/7709) ([alesapin](https://github.com/alesapin)) @@ -3296,7 +3296,7 @@ No changes compared to v20.4.3.16-stable. * Fix function `IN` inside `WHERE` statement when row-level table filter is present. Fixes [#6687](https://github.com/ClickHouse/ClickHouse/issues/6687) [#8357](https://github.com/ClickHouse/ClickHouse/pull/8357) ([Ivan](https://github.com/abyss7)) * Now an exception is thrown if the integral value is not parsed completely for settings values. [#7678](https://github.com/ClickHouse/ClickHouse/pull/7678) ([Mikhail Korotov](https://github.com/millb)) * Fix exception when aggregate function is used in query to distributed table with more than two local shards. [#8164](https://github.com/ClickHouse/ClickHouse/pull/8164) ([小路](https://github.com/nicelulu)) -* Now bloom filter can handle zero length arrays and doesn't perform redundant calculations. [#8242](https://github.com/ClickHouse/ClickHouse/pull/8242) ([achimbab](https://github.com/achimbab)) +* Now bloom filter can handle zero length arrays and does not perform redundant calculations. [#8242](https://github.com/ClickHouse/ClickHouse/pull/8242) ([achimbab](https://github.com/achimbab)) * Fixed checking if a client host is allowed by matching the client host to `host_regexp` specified in `users.xml`. [#8241](https://github.com/ClickHouse/ClickHouse/pull/8241) ([Vitaly Baranov](https://github.com/vitlibar)) * Relax ambiguous column check that leads to false positives in multiple `JOIN ON` section. [#8385](https://github.com/ClickHouse/ClickHouse/pull/8385) ([Artem Zuikov](https://github.com/4ertus2)) * Fixed possible server crash (`std::terminate`) when the server cannot send or write data in `JSON` or `XML` format with values of `String` data type (that require `UTF-8` validation) or when compressing result data with Brotli algorithm or in some other rare cases. This fixes [#7603](https://github.com/ClickHouse/ClickHouse/issues/7603) [#8384](https://github.com/ClickHouse/ClickHouse/pull/8384) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -3323,19 +3323,19 @@ No changes compared to v20.4.3.16-stable. * Fix error in background merge of columns with `SimpleAggregateFunction(LowCardinality)` type. [#8613](https://github.com/ClickHouse/ClickHouse/pull/8613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) * Fixed type check in function `toDateTime64`. [#8375](https://github.com/ClickHouse/ClickHouse/pull/8375) ([Vasily Nemkov](https://github.com/Enmk)) * Now server do not crash on `LEFT` or `FULL JOIN` with and Join engine and unsupported `join_use_nulls` settings. [#8479](https://github.com/ClickHouse/ClickHouse/pull/8479) ([Artem Zuikov](https://github.com/4ertus2)) -* Now `DROP DICTIONARY IF EXISTS db.dict` query doesn't throw exception if `db` doesn't exist. [#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar)) +* Now `DROP DICTIONARY IF EXISTS db.dict` query does not throw exception if `db` does not exist. 
[#8185](https://github.com/ClickHouse/ClickHouse/pull/8185) ([Vitaly Baranov](https://github.com/vitlibar)) * Fix possible crashes in table functions (`file`, `mysql`, `remote`) caused by usage of reference to removed `IStorage` object. Fix incorrect parsing of columns specified at insertion into table function. [#7762](https://github.com/ClickHouse/ClickHouse/pull/7762) ([tavplubix](https://github.com/tavplubix)) * Ensure network be up before starting `clickhouse-server`. This fixes [#7507](https://github.com/ClickHouse/ClickHouse/issues/7507). [#8570](https://github.com/ClickHouse/ClickHouse/pull/8570) ([Zhichang Yu](https://github.com/yuzhichang)) -* Fix timeouts handling for secure connections, so queries doesn't hang indefenitely. This fixes [#8126](https://github.com/ClickHouse/ClickHouse/issues/8126). [#8128](https://github.com/ClickHouse/ClickHouse/pull/8128) ([alexey-milovidov](https://github.com/alexey-milovidov)) +* Fix timeouts handling for secure connections, so queries do not hang indefinitely. This fixes [#8126](https://github.com/ClickHouse/ClickHouse/issues/8126). [#8128](https://github.com/ClickHouse/ClickHouse/pull/8128) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fix `clickhouse-copier`'s redundant contention between concurrent workers. [#7816](https://github.com/ClickHouse/ClickHouse/pull/7816) ([Ding Xiang Fei](https://github.com/dingxiangfei2009)) -* Now mutations doesn't skip attached parts, even if their mutation version were larger than current mutation version. [#7812](https://github.com/ClickHouse/ClickHouse/pull/7812) ([Zhichang Yu](https://github.com/yuzhichang)) [#8250](https://github.com/ClickHouse/ClickHouse/pull/8250) ([alesapin](https://github.com/alesapin)) +* Now mutations do not skip attached parts, even if their mutation version is larger than the current mutation version. [#7812](https://github.com/ClickHouse/ClickHouse/pull/7812) ([Zhichang Yu](https://github.com/yuzhichang)) [#8250](https://github.com/ClickHouse/ClickHouse/pull/8250) ([alesapin](https://github.com/alesapin)) * Ignore redundant copies of `*MergeTree` data parts after move to another disk and server restart. [#7810](https://github.com/ClickHouse/ClickHouse/pull/7810) ([Vladimir Chebotarev](https://github.com/excitoon)) * Fix crash in `FULL JOIN` with `LowCardinality` in `JOIN` key. [#8252](https://github.com/ClickHouse/ClickHouse/pull/8252) ([Artem Zuikov](https://github.com/4ertus2)) * Forbidden to use column name more than once in insert query like `INSERT INTO tbl (x, y, x)`. This fixes [#5465](https://github.com/ClickHouse/ClickHouse/issues/5465), [#7681](https://github.com/ClickHouse/ClickHouse/issues/7681). [#7685](https://github.com/ClickHouse/ClickHouse/pull/7685) ([alesapin](https://github.com/alesapin)) * Added fallback for detection the number of physical CPU cores for unknown CPUs (using the number of logical CPU cores). This fixes [#5239](https://github.com/ClickHouse/ClickHouse/issues/5239). [#7726](https://github.com/ClickHouse/ClickHouse/pull/7726) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fix `There's no column` error for materialized and alias columns. [#8210](https://github.com/ClickHouse/ClickHouse/pull/8210) ([Artem Zuikov](https://github.com/4ertus2)) * Fixed sever crash when `EXISTS` query was used without `TABLE` or `DICTIONARY` qualifier. Just like `EXISTS t`. This fixes [#8172](https://github.com/ClickHouse/ClickHouse/issues/8172). This bug was introduced in version 19.17.
[#8213](https://github.com/ClickHouse/ClickHouse/pull/8213) ([alexey-milovidov](https://github.com/alexey-milovidov)) -* Fix rare bug with error `"Sizes of columns doesn't match"` that might appear when using `SimpleAggregateFunction` column. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea)) +* Fix rare bug with error `"Sizes of columns does not match"` that might appear when using `SimpleAggregateFunction` column. [#7790](https://github.com/ClickHouse/ClickHouse/pull/7790) ([Boris Granveaud](https://github.com/bgranvea)) * Fix bug where user with empty `allow_databases` got access to all databases (and same for `allow_dictionaries`). [#7793](https://github.com/ClickHouse/ClickHouse/pull/7793) ([DeifyTheGod](https://github.com/DeifyTheGod)) * Fix client crash when server already disconnected from client. [#8071](https://github.com/ClickHouse/ClickHouse/pull/8071) ([Azat Khuzhin](https://github.com/azat)) * Fix `ORDER BY` behaviour in case of sorting by primary key prefix and non primary key suffix. [#7759](https://github.com/ClickHouse/ClickHouse/pull/7759) ([Anton Popov](https://github.com/CurtizJ)) @@ -3367,7 +3367,7 @@ No changes compared to v20.4.3.16-stable. * Fix `CHECK TABLE` query for `*MergeTree` tables without key. Fixes [#7543](https://github.com/ClickHouse/ClickHouse/issues/7543). [#7979](https://github.com/ClickHouse/ClickHouse/pull/7979) ([alesapin](https://github.com/alesapin)) * Fixed conversion of `Float64` to MySQL type. [#8079](https://github.com/ClickHouse/ClickHouse/pull/8079) ([Yuriy Baranov](https://github.com/yurriy)) * Now if table was not completely dropped because of server crash, server will try to restore and load it. [#8176](https://github.com/ClickHouse/ClickHouse/pull/8176) ([tavplubix](https://github.com/tavplubix)) -* Fixed crash in table function `file` while inserting into file that doesn't exist. Now in this case file would be created and then insert would be processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia)) +* Fixed crash in table function `file` while inserting into file that does not exist. Now in this case file would be created and then insert would be processed. [#8177](https://github.com/ClickHouse/ClickHouse/pull/8177) ([Olga Khvostikova](https://github.com/stavrolia)) * Fix rare deadlock which can happen when `trace_log` is in enabled. [#7838](https://github.com/ClickHouse/ClickHouse/pull/7838) ([filimonov](https://github.com/filimonov)) * Add ability to work with different types besides `Date` in `RangeHashed` external dictionary created from DDL query. Fixes [7899](https://github.com/ClickHouse/ClickHouse/issues/7899). [#8275](https://github.com/ClickHouse/ClickHouse/pull/8275) ([alesapin](https://github.com/alesapin)) * Fixes crash when `now64()` is called with result of another function. [#8270](https://github.com/ClickHouse/ClickHouse/pull/8270) ([Vasily Nemkov](https://github.com/Enmk)) @@ -3465,7 +3465,7 @@ No changes compared to v20.4.3.16-stable. * Add performance tests for short string optimized hash tables. [#7679](https://github.com/ClickHouse/ClickHouse/pull/7679) ([Amos Bird](https://github.com/amosbird)) * Now ClickHouse will build on `AArch64` even if `MADV_FREE` is not available. This fixes [#8027](https://github.com/ClickHouse/ClickHouse/issues/8027). 
[#8243](https://github.com/ClickHouse/ClickHouse/pull/8243) ([Amos Bird](https://github.com/amosbird)) * Update `zlib-ng` to fix memory sanitizer problems. [#7182](https://github.com/ClickHouse/ClickHouse/pull/7182) [#8206](https://github.com/ClickHouse/ClickHouse/pull/8206) ([Alexander Kuzmenkov](https://github.com/akuzm)) -* Enable internal MySQL library on non-Linux system, because usage of OS packages is very fragile and usually doesn't work at all. This fixes [#5765](https://github.com/ClickHouse/ClickHouse/issues/5765). [#8426](https://github.com/ClickHouse/ClickHouse/pull/8426) ([alexey-milovidov](https://github.com/alexey-milovidov)) +* Enable internal MySQL library on non-Linux systems, because usage of OS packages is very fragile and usually does not work at all. This fixes [#5765](https://github.com/ClickHouse/ClickHouse/issues/5765). [#8426](https://github.com/ClickHouse/ClickHouse/pull/8426) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fixed build on some systems after enabling `libc++`. This supersedes [#8374](https://github.com/ClickHouse/ClickHouse/issues/8374). [#8380](https://github.com/ClickHouse/ClickHouse/pull/8380) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Make `Field` methods more type-safe to find more errors. [#7386](https://github.com/ClickHouse/ClickHouse/pull/7386) [#8209](https://github.com/ClickHouse/ClickHouse/pull/8209) ([Alexander Kuzmenkov](https://github.com/akuzm)) * Added missing files to the `libc-headers` submodule. [#8507](https://github.com/ClickHouse/ClickHouse/pull/8507) ([alexey-milovidov](https://github.com/alexey-milovidov)) @@ -3494,7 +3494,7 @@ No changes compared to v20.4.3.16-stable. * Fix mode of default password file for `.deb` linux distros. [#8075](https://github.com/ClickHouse/ClickHouse/pull/8075) ([proller](https://github.com/proller)) * Improved expression for getting `clickhouse-server` PID in `clickhouse-test`. [#8063](https://github.com/ClickHouse/ClickHouse/pull/8063) ([Alexander Kazakov](https://github.com/Akazz)) * Updated contrib/googletest to v1.10.0. [#8587](https://github.com/ClickHouse/ClickHouse/pull/8587) ([Alexander Burmak](https://github.com/Alex-Burmak)) -* Fixed ThreadSaninitizer report in `base64` library. Also updated this library to the latest version, but it doesn't matter. This fixes [#8397](https://github.com/ClickHouse/ClickHouse/issues/8397). [#8403](https://github.com/ClickHouse/ClickHouse/pull/8403) ([alexey-milovidov](https://github.com/alexey-milovidov)) +* Fixed ThreadSanitizer report in `base64` library. Also updated this library to the latest version, but it does not matter. This fixes [#8397](https://github.com/ClickHouse/ClickHouse/issues/8397). [#8403](https://github.com/ClickHouse/ClickHouse/pull/8403) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Fix `00600_replace_running_query` for processors. [#8272](https://github.com/ClickHouse/ClickHouse/pull/8272) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) * Remove support for `tcmalloc` to make `CMakeLists.txt` simpler. [#8310](https://github.com/ClickHouse/ClickHouse/pull/8310) ([alexey-milovidov](https://github.com/alexey-milovidov)) * Release gcc builds now use `libc++` instead of `libstdc++`. Recently `libc++` was used only with clang. This will improve consistency of build configurations and portability.
[#8311](https://github.com/ClickHouse/ClickHouse/pull/8311) ([alexey-milovidov](https://github.com/alexey-milovidov)) diff --git a/docs/ja/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/ja/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index d0660444e15..f29a608b85e 100644 --- a/docs/ja/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/ja/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -55,7 +55,7 @@ SOURCE(SOURCE_TYPE(param1 val1 ... paramN valN)) -- Source configuration または ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) SETTINGS(format_csv_allow_single_quotes = 0) ``` @@ -87,7 +87,7 @@ SETTINGS(format_csv_allow_single_quotes = 0) または ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) ``` フィールドの設定: diff --git a/docs/ru/engines/table-engines/integrations/hdfs.md b/docs/ru/engines/table-engines/integrations/hdfs.md index b56bbfc0788..c96ac12cd2a 100644 --- a/docs/ru/engines/table-engines/integrations/hdfs.md +++ b/docs/ru/engines/table-engines/integrations/hdfs.md @@ -183,7 +183,7 @@ CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9 #### Ограничения {#limitations} * hadoop\_security\_kerberos\_ticket\_cache\_path могут быть определены только на глобальном уровне -## Поддержика Kerberos {#kerberos-support} +## Поддержка Kerberos {#kerberos-support} Если hadoop\_security\_authentication параметр имеет значение 'kerberos', ClickHouse аутентифицируется с помощью Kerberos. [Расширенные параметры](#clickhouse-extras) и hadoop\_security\_kerberos\_ticket\_cache\_path помогают сделать это. diff --git a/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md b/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md index 9a09618e508..4f0206158f1 100644 --- a/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md +++ b/docs/ru/engines/table-engines/mergetree-family/custom-partitioning-key.md @@ -34,6 +34,8 @@ ORDER BY (CounterID, StartDate, intHash32(UserID)); В этом примере задано партиционирование по типам событий, произошедших в течение текущей недели. +По умолчанию, ключ партиционирования с плавающей запятой не поддерживается. Чтобы использовать его, включите настройку [allow_floating_point_partition_key](../../../operations/settings/merge-tree-settings.md#allow_floating_point_partition_key). + Каждая партиция состоит из отдельных фрагментов или так называемых *кусков данных*. Каждый кусок отсортирован по первичному ключу. При вставке данных в таблицу каждая отдельная запись сохраняется в виде отдельного куска. Через некоторое время после вставки (обычно до 10 минут), ClickHouse выполняет в фоновом режиме слияние данных — в результате куски для одной и той же партиции будут объединены в более крупный кусок. !!! 
info "Info" diff --git a/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md b/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md index ec0b339e8c9..ebd7875179d 100644 --- a/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/replacingmergetree.md @@ -33,11 +33,11 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] **Параметры ReplacingMergeTree** -- `ver` — столбец с версией, тип `UInt*`, `Date` или `DateTime`. Необязательный параметр. +- `ver` — столбец с номером версии. Тип `UInt*`, `Date`, `DateTime` или `DateTime64`. Необязательный параметр. При слиянии `ReplacingMergeTree` оставляет только строку для каждого уникального ключа сортировки: - - Последнюю в выборке, если `ver` не задан. Под выборкой здесь понимается набор строк в наборе партов, участвующих в слиянии. Последний по времени создания парт (последний инсерт) будет последним в выборке. Таким образом, после дедупликации для каждого значения ключа сортировки останется самая последняя строка из самого последнего инсерта. + - Последнюю в выборке, если `ver` не задан. Под выборкой здесь понимается набор строк в наборе кусков данных, участвующих в слиянии. Последний по времени создания кусок (последняя вставка) будет последним в выборке. Таким образом, после дедупликации для каждого значения ключа сортировки останется самая последняя строка из самой последней вставки. - С максимальной версией, если `ver` задан. **Секции запроса** @@ -62,7 +62,6 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] Все параметры, кроме `ver` имеют то же значение, что в и `MergeTree`. -- `ver` — столбец с версией. Необязательный параметр. Описание смотрите выше по тексту. +- `ver` — столбец с номером версии. Необязательный параметр. Описание смотрите выше по тексту. - diff --git a/docs/ru/engines/table-engines/mergetree-family/replication.md b/docs/ru/engines/table-engines/mergetree-family/replication.md index 848adbee4da..cb92084695a 100644 --- a/docs/ru/engines/table-engines/mergetree-family/replication.md +++ b/docs/ru/engines/table-engines/mergetree-family/replication.md @@ -65,6 +65,8 @@ ClickHouse хранит метаинформацию о репликах в [Apa Репликация асинхронная, мульти-мастер. Запросы `INSERT` и `ALTER` можно направлять на любой доступный сервер. Данные вставятся на сервер, где выполнен запрос, а затем скопируются на остальные серверы. В связи с асинхронностью, только что вставленные данные появляются на остальных репликах с небольшой задержкой. Если часть реплик недоступна, данные на них запишутся тогда, когда они станут доступны. Если реплика доступна, то задержка составляет столько времени, сколько требуется для передачи блока сжатых данных по сети. Количество потоков для выполнения фоновых задач можно задать с помощью настройки [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size). +Движок `ReplicatedMergeTree` использует отдельный пул потоков для скачивания кусков данных. Размер пула ограничен настройкой [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size), которую можно указать при перезапуске сервера. + По умолчанию, запрос INSERT ждёт подтверждения записи только от одной реплики. Если данные были успешно записаны только на одну реплику, и сервер с этой репликой перестал существовать, то записанные данные будут потеряны. 
Вы можете включить подтверждение записи от нескольких реплик, используя настройку `insert_quorum`. Каждый блок данных записывается атомарно. Запрос INSERT разбивается на блоки данных размером до `max_insert_block_size = 1048576` строк. То есть, если в запросе `INSERT` менее 1048576 строк, то он делается атомарно. @@ -249,5 +251,6 @@ $ sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data **Смотрите также** - [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) +- [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size) - [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold) diff --git a/docs/ru/operations/configuration-files.md b/docs/ru/operations/configuration-files.md index 11a01d1e6d2..8b4b0da8f2b 100644 --- a/docs/ru/operations/configuration-files.md +++ b/docs/ru/operations/configuration-files.md @@ -6,9 +6,9 @@ toc_title: "Конфигурационные файлы" # Конфигурационные файлы {#configuration_files} -Основной конфигурационный файл сервера - `config.xml`. Он расположен в директории `/etc/clickhouse-server/`. +Основной конфигурационный файл сервера - `config.xml` или `config.yaml`. Он расположен в директории `/etc/clickhouse-server/`. -Отдельные настройки могут быть переопределены в файлах `*.xml` и `*.conf` из директории `config.d` рядом с конфигом. +Отдельные настройки могут быть переопределены в файлах `*.xml` и `*.conf`, а также `.yaml` (для файлов в формате YAML) из директории `config.d` рядом с конфигом. У элементов этих конфигурационных файлов могут быть указаны атрибуты `replace` или `remove`. @@ -25,7 +25,7 @@ toc_title: "Конфигурационные файлы" В элементе `users_config` файла `config.xml` можно указать относительный путь к конфигурационному файлу с настройками пользователей, профилей и квот. Значение `users_config` по умолчанию — `users.xml`. Если `users_config` не указан, то настройки пользователей, профилей и квот можно задать непосредственно в `config.xml`. Настройки пользователя могут быть разделены в несколько отдельных файлов аналогичных `config.xml` и `config.d\`. Имя директории задаётся также как `users_config`. -Имя директории задаётся так же, как имя файла в `users_config`, с подстановкой `.d` вместо `.xml`. +Имя директории задаётся так же, как имя файла в `users_config`, с подстановкой `.d` вместо `.xml`/`.yaml`. Директория `users.d` используется по умолчанию, также как `users.xml` используется для `users_config`. Например, можно иметь по отдельному конфигурационному файлу для каждого пользователя: @@ -52,3 +52,66 @@ $ cat /etc/clickhouse-server/users.d/alice.xml Сервер следит за изменениями конфигурационных файлов, а также файлов и ZooKeeper-узлов, которые были использованы при выполнении подстановок и переопределений, и перезагружает настройки пользователей и кластеров на лету. То есть, можно изменять кластера, пользователей и их настройки без перезапуска сервера. +## Примеры записи конфигурации на YAML {#example} + +Здесь можно рассмотреть пример реальной конфигурации, записанной на YAML: [config.yaml.example](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/config.yaml.example). + +Между стандартами XML и YAML имеются различия, поэтому в этом разделе будут перечислены некоторые подсказки для написания конфигурации на YAML.
+ +Для записи обычной пары ключ-значение следует использовать Scalar: +``` yaml +key: value +``` + +Для создания тега, содержащего подтеги, следует использовать Map: +``` yaml +map_key: + key1: val1 + key2: val2 + key3: val3 +``` + +Для создания списка значений или подтегов, расположенных по определенному ключу, следует использовать Sequence: +``` yaml +seq_key: + - val1 + - val2 + - key1: val3 + - map: + key2: val4 + key3: val5 +``` + +В случае, если необходимо объявить тег, аналогичный XML-атрибуту, необходимо задать скаляр, имеющий ключ с префиксом @ и заключенный в кавычки: + +``` yaml +map: + "@attr1": value1 + "@attr2": value2 + key: 123 +``` + +Из такой Map мы получим после конвертации: + +``` xml +<map attr1="value1" attr2="value2"> + <key>123</key> +</map> +``` + +Помимо Map, можно задавать атрибуты для Sequence: + +``` yaml +seq: + - "@attr1": value1 + - "@attr2": value2 + - 123 + - abc +``` + +Таким образом получаем аналог следующей записи на XML: + +``` xml +<seq attr1="value1" attr2="value2">123</seq> +<seq attr1="value1" attr2="value2">abc</seq> +``` diff --git a/docs/ru/operations/settings/merge-tree-settings.md b/docs/ru/operations/settings/merge-tree-settings.md index e58948b0148..2af99bb8026 100644 --- a/docs/ru/operations/settings/merge-tree-settings.md +++ b/docs/ru/operations/settings/merge-tree-settings.md @@ -246,4 +246,15 @@ Eсли суммарное число активных кусков во все Значение по умолчанию: -1 (неограниченно). +## allow_floating_point_partition_key {#allow_floating_point_partition_key} + +Позволяет использовать число с плавающей запятой в качестве ключа партиционирования. + +Возможные значения: + +- 0 — Ключ партиционирования с плавающей запятой не разрешен. +- 1 — Ключ партиционирования с плавающей запятой разрешен. + +Значение по умолчанию: `0`. + [Original article](https://clickhouse.tech/docs/ru/operations/settings/merge_tree_settings/) diff --git a/docs/ru/operations/settings/settings.md b/docs/ru/operations/settings/settings.md index 45def8c4b50..ada8ee91293 100644 --- a/docs/ru/operations/settings/settings.md +++ b/docs/ru/operations/settings/settings.md @@ -347,7 +347,31 @@ INSERT INTO table_with_enum_column_for_tsv_insert FORMAT TSV 102 2; ## input_format_null_as_default {#settings-input-format-null-as-default} -Включает или отключает использование значений по умолчанию в случаях, когда во входных данных содержится `NULL`, но тип соответствующего столбца не `Nullable(T)` (для текстовых форматов). +Включает или отключает инициализацию [значениями по умолчанию](../../sql-reference/statements/create/table.md#create-default-values) ячеек с [NULL](../../sql-reference/syntax.md#null-literal), если тип данных столбца не позволяет [хранить NULL](../../sql-reference/data-types/nullable.md#data_type-nullable). +Если столбец не позволяет хранить `NULL` и эта настройка отключена, то вставка `NULL` приведет к возникновению исключения. Если столбец позволяет хранить `NULL`, то значения `NULL` вставляются независимо от этой настройки. + +Эта настройка используется для запросов [INSERT ... VALUES](../../sql-reference/statements/insert-into.md) для текстовых входных форматов. + +Возможные значения: + +- 0 — вставка `NULL` в столбец, не позволяющий хранить `NULL`, приведет к возникновению исключения. +- 1 — ячейки с `NULL` инициализируются значением столбца по умолчанию. + +Значение по умолчанию: `1`.
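+
+**Пример**
+
+Ниже приведен примерный запрос (имена таблицы и столбца условные и не взяты из документации), показывающий описанное выше поведение: при включенной настройке значение `NULL` во входных данных для столбца, который не позволяет хранить `NULL`, должно замениться значением по умолчанию этого столбца.
+
+``` sql
+CREATE TABLE test_null_as_default (x Int32 DEFAULT 42) ENGINE = Memory;
+
+SET input_format_null_as_default = 1;
+
+-- NULL в столбце, не позволяющем хранить NULL, инициализируется значением по умолчанию (42)
+INSERT INTO test_null_as_default VALUES (NULL);
+
+SELECT x FROM test_null_as_default;
+```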
+ +## insert_null_as_default {#insert_null_as_default} + +Включает или отключает вставку [значений по умолчанию](../../sql-reference/statements/create/table.md#create-default-values) вместо [NULL](../../sql-reference/syntax.md#null-literal) в столбцы, которые не позволяют [хранить NULL](../../sql-reference/data-types/nullable.md#data_type-nullable). +Если столбец не позволяет хранить `NULL` и эта настройка отключена, то вставка `NULL` приведет к возникновению исключения. Если столбец позволяет хранить `NULL`, то значения `NULL` вставляются независимо от этой настройки. + +Эта настройка используется для запросов [INSERT ... SELECT](../../sql-reference/statements/insert-into.md#insert_query_insert-select). При этом подзапросы `SELECT` могут объединяться с помощью `UNION ALL`. + +Возможные значения: + +- 0 — вставка `NULL` в столбец, не позволяющий хранить `NULL`, приведет к возникновению исключения. +- 1 — вместо `NULL` вставляется значение столбца по умолчанию. + +Значение по умолчанию: `1`. ## input_format_skip_unknown_fields {#settings-input-format-skip-unknown-fields} @@ -2043,6 +2067,16 @@ SELECT idx, i FROM null_in WHERE i IN (1, NULL) SETTINGS transform_null_in = 1; Значение по умолчанию: 16. +## background_fetches_pool_size {#background_fetches_pool_size} + +Задает количество потоков для скачивания кусков данных для [реплицируемых](../../engines/table-engines/mergetree-family/replication.md) таблиц. Настройка применяется при запуске сервера ClickHouse и не может быть изменена в пользовательском сеансе. Для использования в продакшене с частыми небольшими вставками или медленным кластером ZooKeeper рекомендуется использовать значение по умолчанию. + +Допустимые значения: + +- Положительное целое число. + +Значение по умолчанию: 8. + ## background_distributed_schedule_pool_size {#background_distributed_schedule_pool_size} Задает количество потоков для выполнения фоновых задач. Работает для таблиц с движком [Distributed](../../engines/table-engines/special/distributed.md). Настройка применяется при запуске сервера ClickHouse и не может быть изменена в пользовательском сеансе. @@ -2890,4 +2924,36 @@ SELECT * FROM test LIMIT 10 OFFSET 100; Значение по умолчанию: `1800`. +## optimize_fuse_sum_count_avg {#optimize_fuse_sum_count_avg} + +Позволяет объединить агрегатные функции с одинаковым аргументом. Запрос, содержащий по крайней мере две агрегатные функции: [sum](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum), [count](../../sql-reference/aggregate-functions/reference/count.md#agg_function-count) или [avg](../../sql-reference/aggregate-functions/reference/avg.md#agg_function-avg) с одинаковым аргументом, перезаписывается как [sumCount](../../sql-reference/aggregate-functions/reference/sumcount.md#agg_function-sumCount). + +Возможные значения: + +- 0 — функции с одинаковым аргументом не объединяются. +- 1 — функции с одинаковым аргументом объединяются. + +Значение по умолчанию: `0`. 
+ +**Пример** + +Запрос: + +``` sql +CREATE TABLE fuse_tbl(a Int8, b Int8) Engine = Log; +SET optimize_fuse_sum_count_avg = 1; +EXPLAIN SYNTAX SELECT sum(a), sum(b), count(b), avg(b) from fuse_tbl FORMAT TSV; +``` + +Результат: + +``` text +SELECT + sum(a), + sumCount(b).1, + sumCount(b).2, + (sumCount(b).1) / (sumCount(b).2) +FROM fuse_tbl +``` + [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings/) diff --git a/docs/ru/operations/system-tables/stack_trace.md b/docs/ru/operations/system-tables/stack_trace.md index 58d0a1c4b6a..338c14534cf 100644 --- a/docs/ru/operations/system-tables/stack_trace.md +++ b/docs/ru/operations/system-tables/stack_trace.md @@ -6,9 +6,10 @@ Столбцы: -- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Идентификатор потока. -- `query_id` ([String](../../sql-reference/data-types/string.md)) — Идентификатор запроса. Может быть использован для получения подробной информации о выполненном запросе из системной таблицы [query_log](#system_tables-query_log). -- `trace` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — [Трассировка стека](https://en.wikipedia.org/wiki/Stack_trace). Представляет собой список физических адресов, по которым расположены вызываемые методы. +- `thread_name` ([String](../../sql-reference/data-types/string.md)) — имя потока. +- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — идентификатор потока. +- `query_id` ([String](../../sql-reference/data-types/string.md)) — идентификатор запроса. Может быть использован для получения подробной информации о выполненном запросе из системной таблицы [query_log](#system_tables-query_log). +- `trace` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — [трассировка стека](https://en.wikipedia.org/wiki/Stack_trace). Представляет собой список физических адресов, по которым расположены вызываемые методы. **Пример** @@ -21,12 +22,14 @@ SET allow_introspection_functions = 1; Получение символов из объектных файлов ClickHouse: ``` sql -WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT thread_id, query_id, arrayStringConcat(all, '\n') AS res FROM system.stack_trace LIMIT 1 \G +WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all SELECT thread_name, thread_id, query_id, arrayStringConcat(all, '\n') AS res FROM system.stack_trace LIMIT 1 \G; ``` ``` text Row 1: ────── +thread_name: clickhouse-serv + thread_id: 686 query_id: 1a11f70b-626d-47c1-b948-f9c7b206395d res: sigqueue @@ -51,12 +54,14 @@ __clone Получение имен файлов и номеров строк в исходном коде ClickHouse: ``` sql -WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT thread_id, query_id, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\n') AS res FROM system.stack_trace LIMIT 1 \G +WITH arrayMap(x -> addressToLine(x), trace) AS all, arrayFilter(x -> x LIKE '%/dbms/%', all) AS dbms SELECT thread_name, thread_id, query_id, arrayStringConcat(notEmpty(dbms) ? dbms : all, '\n') AS res FROM system.stack_trace LIMIT 1 \G; ``` ``` text Row 1: ────── +thread_name: clickhouse-serv + thread_id: 686 query_id: cad353e7-1c29-4b2e-949f-93e597ab7a54 res: /lib/x86_64-linux-gnu/libc-2.27.so @@ -78,10 +83,9 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so /lib/x86_64-linux-gnu/libc-2.27.so ``` -**См. также** - -- [Функции интроспекции](../../sql-reference/functions/introspection.md) — Что такое функции интроспекции и как их использовать. 
-- [system.trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) — Содержит трассировки стека, собранные профилировщиком выборочных запросов. -- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Описание и пример использования функции `arrayMap`. -- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Описание и пример использования функции `arrayFilter`. +**Смотрите также** +- [Функции интроспекции](../../sql-reference/functions/introspection.md) — описание функций интроспекции и примеры использования. +- [system.trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) — системная таблица, содержащая трассировки стека, собранные профилировщиком выборочных запросов. +- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — описание и пример использования функции `arrayMap`. +- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — описание и пример использования функции `arrayFilter`. diff --git a/docs/ru/operations/system-tables/tables.md b/docs/ru/operations/system-tables/tables.md index 11bb6a9eda2..3dec1e7d940 100644 --- a/docs/ru/operations/system-tables/tables.md +++ b/docs/ru/operations/system-tables/tables.md @@ -39,6 +39,8 @@ - `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество байт, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). +- `comment` ([String](../../sql-reference/data-types/string.md)) — комментарий к таблице. + Таблица `system.tables` используется при выполнении запроса `SHOW TABLES`. **Пример** @@ -50,45 +52,51 @@ SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; ```text Row 1: ────── -database: system -name: aggregate_function_combinators -uuid: 00000000-0000-0000-0000-000000000000 -engine: SystemAggregateFunctionCombinators +database: base +name: t1 +uuid: 81b1c20a-b7c6-4116-a2ce-7583fb6b6736 +engine: MergeTree is_temporary: 0 -data_paths: [] -metadata_path: /var/lib/clickhouse/metadata/system/aggregate_function_combinators.sql -metadata_modification_time: 1970-01-01 03:00:00 +data_paths: ['/var/lib/clickhouse/store/81b/81b1c20a-b7c6-4116-a2ce-7583fb6b6736/'] +metadata_path: /var/lib/clickhouse/store/461/461cf698-fd0b-406d-8c01-5d8fd5748a91/t1.sql +metadata_modification_time: 2021-01-25 19:14:32 dependencies_database: [] dependencies_table: [] -create_table_query: -engine_full: -partition_key: -sorting_key: -primary_key: -sampling_key: -storage_policy: -total_rows: ᴺᵁᴸᴸ -total_bytes: ᴺᵁᴸᴸ +create_table_query: CREATE TABLE base.t1 (`n` UInt64) ENGINE = MergeTree ORDER BY n SETTINGS index_granularity = 8192 +engine_full: MergeTree ORDER BY n SETTINGS index_granularity = 8192 +partition_key: +sorting_key: n +primary_key: n +sampling_key: +storage_policy: default +total_rows: 1 +total_bytes: 99 +lifetime_rows: ᴺᵁᴸᴸ +lifetime_bytes: ᴺᵁᴸᴸ +comment: Row 2: ────── -database: system -name: asynchronous_metrics +database: default +name: 53r93yleapyears uuid: 00000000-0000-0000-0000-000000000000 -engine: SystemAsynchronousMetrics +engine: MergeTree is_temporary: 0 -data_paths: [] -metadata_path: /var/lib/clickhouse/metadata/system/asynchronous_metrics.sql -metadata_modification_time: 1970-01-01 03:00:00 +data_paths: ['/var/lib/clickhouse/data/default/53r93yleapyears/'] +metadata_path: /var/lib/clickhouse/metadata/default/53r93yleapyears.sql +metadata_modification_time: 2020-09-23 09:05:36 
dependencies_database: [] dependencies_table: [] -create_table_query: -engine_full: -partition_key: -sorting_key: -primary_key: -sampling_key: -storage_policy: -total_rows: ᴺᵁᴸᴸ -total_bytes: ᴺᵁᴸᴸ +create_table_query: CREATE TABLE default.`53r93yleapyears` (`id` Int8, `febdays` Int8) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 8192 +engine_full: MergeTree ORDER BY id SETTINGS index_granularity = 8192 +partition_key: +sorting_key: id +primary_key: id +sampling_key: +storage_policy: default +total_rows: 2 +total_bytes: 155 +lifetime_rows: ᴺᵁᴸᴸ +lifetime_bytes: ᴺᵁᴸᴸ +comment: ``` diff --git a/docs/ru/sql-reference/aggregate-functions/parametric-functions.md b/docs/ru/sql-reference/aggregate-functions/parametric-functions.md index e5162b63b88..508c8de2a58 100644 --- a/docs/ru/sql-reference/aggregate-functions/parametric-functions.md +++ b/docs/ru/sql-reference/aggregate-functions/parametric-functions.md @@ -253,7 +253,7 @@ windowFunnel(window, [mode, [mode, ... ]])(timestamp, cond1, cond2, ..., condN) **Параметры** -- `window` — ширина скользящего окна по времени. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond2 <= timestamp события cond1 + window`. +- `window` — ширина скользящего окна по времени. Это время между первым и последним условием. Единица измерения зависит от `timestamp` и может варьироваться. Должно соблюдаться условие `timestamp события cond1 <= timestamp события cond2 <= ... <= timestamp события condN <= timestamp события cond1 + window`. - `mode` — необязательный параметр. Может быть установленно несколько значений одновременно. - `'strict'` — не учитывать подряд идущие повторяющиеся события. - `'strict_order'` — запрещает посторонние события в искомой последовательности. Например, при поиске цепочки `A->B->C` в `A->B->D->C` поиск будет остановлен на `D` и функция вернет 2. @@ -311,7 +311,7 @@ FROM GROUP BY user_id ) GROUP BY level -ORDER BY level ASC +ORDER BY level ASC; ``` ## retention {#retention} diff --git a/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md b/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md index c98e7b88bcf..73d0552fc6f 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/rankCorr.md @@ -1,4 +1,8 @@ -## rankCorr {#agg_function-rankcorr} +--- +toc_priority: 145 +--- + +# rankCorr {#agg_function-rankcorr} Вычисляет коэффициент ранговой корреляции. diff --git a/docs/ru/sql-reference/aggregate-functions/reference/sumcount.md b/docs/ru/sql-reference/aggregate-functions/reference/sumcount.md new file mode 100644 index 00000000000..0606b06fba0 --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/sumcount.md @@ -0,0 +1,46 @@ +--- +toc_priority: 144 +--- + +# sumCount {#agg_function-sumCount} + +Вычисляет сумму чисел и одновременно подсчитывает количество строк. + +**Синтаксис** + +``` sql +sumCount(x) +``` + +**Аргументы** + +- `x` — Входное значение типа [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md), или [Decimal](../../../sql-reference/data-types/decimal.md). + +**Возвращаемое значение** + +- Кортеж из элементов `(sum, count)`, где `sum` — это сумма чисел и `count` — количество строк со значениями, отличными от `NULL`. + +Тип: [Tuple](../../../sql-reference/data-types/tuple.md). 
+ +**Пример** + +Запрос: + +``` sql +CREATE TABLE s_table (x Nullable(Int8)) Engine = Log; +INSERT INTO s_table SELECT number FROM numbers(0, 20); +INSERT INTO s_table VALUES (NULL); +SELECT sumCount(x) from s_table; +``` + +Результат: + +``` text +┌─sumCount(x)─┐ +│ (190,20) │ +└─────────────┘ +``` + +**Смотрите также** + +- Настройка [optimize_fuse_sum_count_avg](../../../operations/settings/settings.md#optimize_fuse_sum_count_avg) diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md index e63b5574e30..f1f4a8fae18 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md @@ -121,7 +121,7 @@ LAYOUT(HASHED(PREALLOCATE 0)) Аналогичен `hashed`, но при этом занимает меньше места в памяти и генерирует более высокую загрузку CPU. -Для этого типа размещения также можно задать `preallocate` в значении `true`. В данном случае это более важно, чем для типа `hashed`. +Для этого типа размещения также можно задать `preallocate` в значении `true`. В данном случае это более важно, чем для типа `hashed`. Пример конфигурации: @@ -338,7 +338,7 @@ LAYOUT(CACHE(SIZE_IN_CELLS 1000000000)) ``` sql LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 16777216 READ_BUFFER_SIZE 1048576 - PATH /var/lib/clickhouse/clickhouse_dictionaries/test_dict)) + PATH ./user_files/test_dict)) ``` ### complex_key_ssd_cache {#complex-key-ssd-cache} diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index a7999470330..ff83eb425d0 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -53,7 +53,7 @@ SOURCE(SOURCE_TYPE(param1 val1 ... paramN valN)) -- Source configuration или ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) SETTINGS(format_csv_allow_single_quotes = 0) ``` @@ -69,7 +69,7 @@ SETTINGS(format_csv_allow_single_quotes = 0) - [ClickHouse](#dicts-external_dicts_dict_sources-clickhouse) - [MongoDB](#dicts-external_dicts_dict_sources-mongodb) - [Redis](#dicts-external_dicts_dict_sources-redis) - - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql) + - [PostgreSQL](#dicts-external_dicts_dict_sources-postgresql) ## Локальный файл {#dicts-external_dicts_dict_sources-local_file} @@ -87,7 +87,7 @@ SETTINGS(format_csv_allow_single_quotes = 0) или ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) ``` Поля настройки: diff --git a/docs/ru/sql-reference/dictionaries/index.md b/docs/ru/sql-reference/dictionaries/index.md index 59c7518d0c5..84c6f1a3c13 100644 --- a/docs/ru/sql-reference/dictionaries/index.md +++ b/docs/ru/sql-reference/dictionaries/index.md @@ -12,6 +12,5 @@ ClickHouse поддерживает специальные функции для ClickHouse поддерживает: -- [Встроенные словари](internal-dicts.md#internal_dicts) со специфическим [набором функций](../../sql-reference/dictionaries/external-dictionaries/index.md). 
-- [Подключаемые (внешние) словари](external-dictionaries/external-dicts.md#dicts-external-dicts) с [набором функций](../../sql-reference/dictionaries/external-dictionaries/index.md). - +- [Встроенные словари](internal-dicts.md#internal_dicts) со специфическим [набором функций](../../sql-reference/functions/ext-dict-functions.md). +- [Подключаемые (внешние) словари](external-dictionaries/external-dicts.md#dicts-external-dicts) с [набором функций](../../sql-reference/functions/ext-dict-functions.md). \ No newline at end of file diff --git a/docs/ru/sql-reference/functions/encoding-functions.md b/docs/ru/sql-reference/functions/encoding-functions.md index f4fa21ba46a..23e840a7898 100644 --- a/docs/ru/sql-reference/functions/encoding-functions.md +++ b/docs/ru/sql-reference/functions/encoding-functions.md @@ -153,8 +153,60 @@ Result: ## unhex(str) {#unhexstr} -Accepts a string containing any number of hexadecimal digits, and returns a string containing the corresponding bytes. Supports both uppercase and lowercase letters A-F. The number of hexadecimal digits does not have to be even. If it is odd, the last digit is interpreted as the least significant half of the 00-0F byte. If the argument string contains anything other than hexadecimal digits, some implementation-defined result is returned (an exception isn’t thrown). -If you want to convert the result to a number, you can use the ‘reverse’ and ‘reinterpretAsType’ functions. +Выполняет операцию, обратную [hex](#hex). Функция интерпретирует каждую пару шестнадцатеричных цифр аргумента как число и преобразует его в символ. Возвращаемое значение представляет собой двоичную строку (BLOB). + +Если вы хотите преобразовать результат в число, вы можете использовать функции [reverse](../../sql-reference/functions/string-functions.md#reverse) и [reinterpretAs](../../sql-reference/functions/type-conversion-functions.md#type-conversion-functions). + +!!! note "Примечание" + Если `unhex` вызывается из `clickhouse-client`, двоичные строки отображаются с использованием UTF-8. + +Синоним: `UNHEX`. + +**Синтаксис** + +``` sql +unhex(arg) +``` + +**Аргументы** + +- `arg` — Строка, содержащая любое количество шестнадцатеричных цифр. Тип: [String](../../sql-reference/data-types/string.md). + +Поддерживаются как прописные, так и строчные буквы `A-F`. Количество шестнадцатеричных цифр не обязательно должно быть четным. Если оно нечетное, последняя цифра интерпретируется как наименее значимая половина байта `00-0F`. Если строка аргумента содержит что-либо, кроме шестнадцатеричных цифр, возвращается некоторый результат, определенный реализацией (исключение не создается). + +**Возвращаемое значение** + +- Бинарная строка (BLOB). + +Тип: [String](../../sql-reference/data-types/string.md). + +**Пример** + +Запрос: +``` sql +SELECT unhex('303132'), UNHEX('4D7953514C'); +``` + +Результат: +``` text +┌─unhex('303132')─┬─unhex('4D7953514C')─┐ +│ 012 │ MySQL │ +└─────────────────┴─────────────────────┘ +``` + +Запрос: + +``` sql +SELECT reinterpretAsUInt64(reverse(unhex('FFF'))) AS num; +``` + +Результат: + +``` text +┌──num─┐ +│ 4095 │ +└──────┘ +``` ## UUIDStringToNum(str) {#uuidstringtonumstr} @@ -171,4 +223,3 @@ If you want to convert the result to a number, you can use the ‘reverse’ and ## bitmaskToArray(num) {#bitmasktoarraynum} Принимает целое число. Возвращает массив чисел типа UInt64, содержащий степени двойки, в сумме дающих исходное число; числа в массиве идут по возрастанию. 
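+
+Например, примерный запрос ниже иллюстрирует описанное поведение: исходя из описания выше, для числа 50 (50 = 2 + 16 + 32) функция должна вернуть массив `[2,16,32]`.
+
+``` sql
+SELECT bitmaskToArray(50) AS powers; -- ожидаемый результат: [2,16,32]
+```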
- diff --git a/docs/ru/sql-reference/functions/encryption-functions.md b/docs/ru/sql-reference/functions/encryption-functions.md index 844f9cc3197..44957fde152 100644 --- a/docs/ru/sql-reference/functions/encryption-functions.md +++ b/docs/ru/sql-reference/functions/encryption-functions.md @@ -5,11 +5,11 @@ toc_title: "Функции для шифрования" # Функции шифрования {#encryption-functions} -Даннвые функции реализуют шифрование и расшифровку данных с помощью AES (Advanced Encryption Standard) алгоритма. +Данные функции реализуют шифрование и расшифровку данных с помощью AES (Advanced Encryption Standard) алгоритма. Длина ключа зависит от режима шифрования. Он может быть длинной в 16, 24 и 32 байта для режимов шифрования `-128-`, `-196-` и `-256-` соответственно. -Длина инициализирующего вектора всегда 16 байт (лишнии байты игнорируются). +Длина инициализирующего вектора всегда 16 байт (лишние байты игнорируются). Обратите внимание, что до версии Clickhouse 21.1 эти функции работали медленно. diff --git a/docs/ru/sql-reference/functions/ext-dict-functions.md b/docs/ru/sql-reference/functions/ext-dict-functions.md index be91145659e..612477dc806 100644 --- a/docs/ru/sql-reference/functions/ext-dict-functions.md +++ b/docs/ru/sql-reference/functions/ext-dict-functions.md @@ -3,6 +3,9 @@ toc_priority: 58 toc_title: "Функции для работы с внешними словарями" --- +!!! attention "Внимание" + Для словарей, созданных с помощью [DDL-запросов](../../sql-reference/statements/create/dictionary.md), в параметре `dict_name` указывается полное имя словаря вместе с базой данных, например: `.`. Если база данных не указана, используется текущая. + # Функции для работы с внешними словарями {#ext_dict_functions} Информацию о подключении и настройке внешних словарей смотрите в разделе [Внешние словари](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md). @@ -241,7 +244,7 @@ dictHas('dict_name', id) - 0, если ключа нет. - 1, если ключ есть. -Тип — `UInt8`. +Тип: [UInt8](../../sql-reference/data-types/int-uint.md). ## dictGetHierarchy {#dictgethierarchy} @@ -262,7 +265,7 @@ dictGetHierarchy('dict_name', key) - Цепочка предков заданного ключа. -Type: [Array(UInt64)](../../sql-reference/functions/ext-dict-functions.md). +Type: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)). ## dictIsIn {#dictisin} @@ -281,7 +284,120 @@ Type: [Array(UInt64)](../../sql-reference/functions/ext-dict-functions.md). - 0, если `child_id_expr` — не дочерний элемент `ancestor_id_expr`. - 1, если `child_id_expr` — дочерний элемент `ancestor_id_expr` или если `child_id_expr` и есть `ancestor_id_expr`. -Тип — `UInt8`. +Тип: [UInt8](../../sql-reference/data-types/int-uint.md). + +## dictGetChildren {#dictgetchildren} + +Возвращает потомков первого уровня в виде массива индексов. Это обратное преобразование для [dictGetHierarchy](#dictgethierarchy). + +**Синтаксис** + +``` sql +dictGetChildren(dict_name, key) +``` + +**Аргументы** + +- `dict_name` — имя словаря. [String literal](../../sql-reference/syntax.md#syntax-string-literal). +- `key` — значение ключа. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md). + +**Возвращаемые значения** + +- Потомки первого уровня для ключа. + +Тип: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)). 
+ +**Пример** + +Рассмотрим иерархический словарь: + +``` text +┌─id─┬─parent_id─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +│ 4 │ 2 │ +└────┴───────────┘ +``` + +Потомки первого уровня: + +``` sql +SELECT dictGetChildren('hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetChildren('hierarchy_flat_dictionary', number)─┐ +│ [1] │ +│ [2,3] │ +│ [4] │ +│ [] │ +└──────────────────────────────────────────────────────┘ +``` + +## dictGetDescendant {#dictgetdescendant} + +Возвращает всех потомков, как если бы функция [dictGetChildren](#dictgetchildren) была выполнена `level` раз рекурсивно. + +**Синтаксис** + +``` sql +dictGetDescendants(dict_name, key, level) +``` + +**Аргументы** + +- `dict_name` — имя словаря. [String literal](../../sql-reference/syntax.md#syntax-string-literal). +- `key` — значение ключа. [Выражение](../syntax.md#syntax-expressions), возвращающее значение типа [UInt64](../../sql-reference/functions/ext-dict-functions.md). +- `level` — уровень иерархии. Если `level = 0`, возвращаются все потомки. [UInt8](../../sql-reference/data-types/int-uint.md). + +**Возвращаемые значения** + +- Потомки для ключа. + +Тип: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)). + +**Пример** + +Рассмотрим иерархический словарь: + +``` text +┌─id─┬─parent_id─┐ +│ 1 │ 0 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +│ 4 │ 2 │ +└────┴───────────┘ +``` +Все потомки: + +``` sql +SELECT dictGetDescendants('hierarchy_flat_dictionary', number) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetDescendants('hierarchy_flat_dictionary', number)─┐ +│ [1,2,3,4] │ +│ [2,3,4] │ +│ [4] │ +│ [] │ +└─────────────────────────────────────────────────────────┘ +``` + +Потомки первого уровня: + +``` sql +SELECT dictGetDescendants('hierarchy_flat_dictionary', number, 1) FROM system.numbers LIMIT 4; +``` + +``` text +┌─dictGetDescendants('hierarchy_flat_dictionary', number, 1)─┐ +│ [1] │ +│ [2,3] │ +│ [4] │ +│ [] │ +└────────────────────────────────────────────────────────────┘ +``` ## Прочие функции {#ext_dict_functions-other} diff --git a/docs/ru/sql-reference/functions/ip-address-functions.md b/docs/ru/sql-reference/functions/ip-address-functions.md index d7f6d2f7618..b02d45d7667 100644 --- a/docs/ru/sql-reference/functions/ip-address-functions.md +++ b/docs/ru/sql-reference/functions/ip-address-functions.md @@ -397,9 +397,9 @@ SELECT addr, isIPv6String(addr) FROM ( SELECT ['::', '1111::ffff', '::ffff:127.0 ## isIPAddressInRange {#isipaddressinrange} -Проверяет попадает ли IP адрес в интервал, заданный в [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) нотации. +Проверяет, попадает ли IP адрес в интервал, заданный в нотации [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). -**Syntax** +**Синтаксис** ``` sql isIPAddressInRange(address, prefix) @@ -409,7 +409,7 @@ isIPAddressInRange(address, prefix) **Аргументы** - `address` — IPv4 или IPv6 адрес. [String](../../sql-reference/data-types/string.md). -- `prefix` — IPv4 или IPv6 подсеть, заданная в CIDR нотации. [String](../../sql-reference/data-types/string.md). +- `prefix` — IPv4 или IPv6 подсеть, заданная в нотации CIDR. [String](../../sql-reference/data-types/string.md). 
**Возвращаемое значение** @@ -422,7 +422,7 @@ isIPAddressInRange(address, prefix) Запрос: ``` sql -SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8') +SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8'); ``` Результат: @@ -436,7 +436,7 @@ SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8') Запрос: ``` sql -SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16') +SELECT isIPAddressInRange('127.0.0.1', 'ffff::/16'); ``` Результат: diff --git a/docs/ru/sql-reference/functions/json-functions.md b/docs/ru/sql-reference/functions/json-functions.md index 4de487c03ad..8941ccc1691 100644 --- a/docs/ru/sql-reference/functions/json-functions.md +++ b/docs/ru/sql-reference/functions/json-functions.md @@ -18,37 +18,37 @@ toc_title: JSON Проверяет наличие поля с именем `name`. -Алиас: `simpleJSONHas`. +Синоним: `simpleJSONHas`. ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name} Пытается выделить число типа UInt64 из значения поля с именем `name`. Если поле строковое, пытается выделить число из начала строки. Если такого поля нет, или если оно есть, но содержит не число, то возвращает 0. -Алиас: `simpleJSONExtractUInt`. +Синоним: `simpleJSONExtractUInt`. ## visitParamExtractInt(params, name) {#visitparamextractintparams-name} Аналогично для Int64. -Алиас: `simpleJSONExtractInt`. +Синоним: `simpleJSONExtractInt`. ## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name} Аналогично для Float64. -Алиас: `simpleJSONExtractFloat`. +Синоним: `simpleJSONExtractFloat`. ## visitParamExtractBool(params, name) {#visitparamextractboolparams-name} Пытается выделить значение true/false. Результат — UInt8. -Алиас: `simpleJSONExtractBool`. +Синоним: `simpleJSONExtractBool`. ## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name} Возвращает значение поля, включая разделители. -Алиас: `simpleJSONExtractRaw`. +Синоним: `simpleJSONExtractRaw`. Примеры: @@ -61,7 +61,7 @@ visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'; Разбирает строку в двойных кавычках. У значения убирается экранирование. Если убрать экранированные символы не удалось, то возвращается пустая строка. -Алиас: `simpleJSONExtractString`. +Синоним: `simpleJSONExtractString`. Примеры: diff --git a/docs/ru/sql-reference/functions/splitting-merging-functions.md b/docs/ru/sql-reference/functions/splitting-merging-functions.md index b8d04982b91..5a0c540cf3a 100644 --- a/docs/ru/sql-reference/functions/splitting-merging-functions.md +++ b/docs/ru/sql-reference/functions/splitting-merging-functions.md @@ -14,7 +14,7 @@ separator должен быть константной строкой из ро **Синтаксис** ``` sql -splitByChar(, ) +splitByChar(separator, s) ``` **Аргументы** @@ -30,12 +30,12 @@ splitByChar(, ) - Задано несколько последовательных разделителей; - Исходная строка `s` пуста. -Type: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Тип: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). **Пример** ``` sql -SELECT splitByChar(',', '1,2,3,abcde') +SELECT splitByChar(',', '1,2,3,abcde'); ``` ``` text @@ -67,12 +67,12 @@ splitByString(separator, s) - Задано несколько последовательных разделителей; - Исходная строка `s` пуста. -Тип: [Array](../../sql-reference/data-types/array.md) of [String](../../sql-reference/data-types/string.md). +Тип: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). 
**Примеры** ``` sql -SELECT splitByString(', ', '1, 2 3, 4,5, abcde') +SELECT splitByString(', ', '1, 2 3, 4,5, abcde'); ``` ``` text @@ -82,7 +82,7 @@ SELECT splitByString(', ', '1, 2 3, 4,5, abcde') ``` ``` sql -SELECT splitByString('', 'abcde') +SELECT splitByString('', 'abcde'); ``` ``` text @@ -91,6 +91,60 @@ SELECT splitByString('', 'abcde') └────────────────────────────┘ ``` +## splitByRegexp(regexp, s) {#splitbyregexpseparator-s} + +Разбивает строку на подстроки, разделенные регулярным выражением. В качестве разделителя используется строка регулярного выражения `regexp`. Если `regexp` пустая, функция разделит строку `s` на массив одиночных символов. Если для регулярного выражения совпадения не найдено, строка `s` не будет разбита. + +**Синтаксис** + +``` sql +splitByRegexp(regexp, s) +``` + +**Аргументы** + +- `regexp` — регулярное выражение. Константа. [String](../data-types/string.md) или [FixedString](../data-types/fixedstring.md). +- `s` — разбиваемая строка. [String](../../sql-reference/data-types/string.md). + +**Возвращаемые значения** + +Возвращает массив выбранных подстрок. Пустая подстрока может быть возвращена, если: + +- Непустое совпадение с регулярным выражением происходит в начале или конце строки; +- Имеется несколько последовательных совпадений c непустым регулярным выражением; +- Исходная строка `s` пуста, а регулярное выражение не пустое. + +Тип: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)). + +**Примеры** + +Запрос: + +``` sql +SELECT splitByRegexp('\\d+', 'a12bc23de345f'); +``` + +Результат: + +``` text +┌─splitByRegexp('\\d+', 'a12bc23de345f')─┐ +│ ['a','bc','de','f'] │ +└────────────────────────────────────────┘ +``` + +Запрос: + +``` sql +SELECT splitByRegexp('', 'abcde'); +``` + +Результат: + +``` text +┌─splitByRegexp('', 'abcde')─┐ +│ ['a','b','c','d','e'] │ +└────────────────────────────┘ +``` ## arrayStringConcat(arr\[, separator\]) {#arraystringconcatarr-separator} @@ -106,7 +160,7 @@ separator - необязательный параметр, константна **Пример:** ``` sql -SELECT alphaTokens('abca1abc') +SELECT alphaTokens('abca1abc'); ``` ``` text @@ -114,4 +168,3 @@ SELECT alphaTokens('abca1abc') │ ['abca','abc'] │ └─────────────────────────┘ ``` - diff --git a/docs/ru/sql-reference/functions/type-conversion-functions.md b/docs/ru/sql-reference/functions/type-conversion-functions.md index fc1dd15f8e3..8707642eb59 100644 --- a/docs/ru/sql-reference/functions/type-conversion-functions.md +++ b/docs/ru/sql-reference/functions/type-conversion-functions.md @@ -3,7 +3,7 @@ toc_priority: 38 toc_title: "Функции преобразования типов" --- -# Функции преобразования типов {#funktsii-preobrazovaniia-tipov} +# Функции преобразования типов {#type-conversion-functions} ## Общие проблемы преобразования чисел {#numeric-conversion-issues} @@ -369,7 +369,7 @@ SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut; ## reinterpretAsUUID {#reinterpretasuuid} -Функция принимает шестнадцатибайтную строку и интерпретирует ее байты в network order (big-endian). Если строка имеет недостаточную длину, то функция работает так, как будто строка дополнена необходимым количетсвом нулевых байт с конца. Если строка длиннее, чем шестнадцать байт, то игнорируются лишние байты с конца. +Функция принимает строку из 16 байт и интерпретирует ее байты в порядок от старшего к младшему. Если строка имеет недостаточную длину, то функция работает так, как будто строка дополнена необходимым количеством нулевых байтов с конца. 
Если строка длиннее, чем 16 байтов, то лишние байты с конца игнорируются. **Синтаксис** @@ -425,9 +425,27 @@ SELECT uuid = uuid2; ## reinterpret(x, T) {#type_conversion_function-reinterpret} -Использует туже самую исходную последовательность байт в памяти для значения `x` и переинтерпретирует ее как конечный тип данных +Использует ту же самую исходную последовательность байтов в памяти для значения `x` и интерпретирует ее как конечный тип данных `T`. + +**Синтаксис** + +``` sql +reinterpret(x, type) +``` + +**Аргументы** + +- `x` — любой тип данных. +- `type` — конечный тип данных. [String](../../sql-reference/data-types/string.md). + +**Возвращаемое значение** + +- Значение конечного типа данных. + +**Примеры** Запрос: + ```sql SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint, reinterpret(toInt8(1), 'Float32') as int_to_float, @@ -448,7 +466,23 @@ SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint, Поддерживается также синтаксис `CAST(x AS t)`. -Обратите внимание, что если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255. +!!! warning "Предупреждение" + Если значение `x` не может быть преобразовано к типу `T`, возникает переполнение. Например, `CAST(-1, 'UInt8')` возвращает 255. + +**Синтаксис** + +``` sql +CAST(x, T) +``` + +**Аргументы** + +- `x` — любой тип данных. +- `T` — конечный тип данных. [String](../../sql-reference/data-types/string.md). + +**Возвращаемое значение** + +- Значение конечного типа данных. **Примеры** @@ -456,9 +490,9 @@ SELECT reinterpret(toInt8(-1), 'UInt8') as int_to_uint, ```sql SELECT - cast(toInt8(-1), 'UInt8') AS cast_int_to_uint, - cast(toInt8(1), 'Float32') AS cast_int_to_float, - cast('1', 'UInt32') AS cast_string_to_int + CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint, + CAST(toInt8(1), 'Float32') AS cast_int_to_float, + CAST('1', 'UInt32') AS cast_string_to_int ``` Результат: @@ -488,9 +522,9 @@ SELECT └─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘ ``` -Преобразование в FixedString(N) работает только для аргументов типа String или FixedString(N). +Преобразование в FixedString(N) работает только для аргументов типа [String](../../sql-reference/data-types/string.md) или [FixedString](../../sql-reference/data-types/fixedstring.md). -Поддержано преобразование к типу [Nullable](../../sql-reference/functions/type-conversion-functions.md) и обратно. +Поддерживается преобразование к типу [Nullable](../../sql-reference/functions/type-conversion-functions.md) и обратно. **Примеры** @@ -860,7 +894,7 @@ AS parseDateTimeBestEffortUS; ## parseDateTimeBestEffortOrZero {#parsedatetimebesteffortorzero} ## parseDateTime32BestEffortOrZero {#parsedatetime32besteffortorzero} -Работает также как [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевую дату или нулевую дату и время когда получает формат даты который не может быть обработан. +Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но возвращает нулевое значение, если формат даты не может быть обработан. 
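Ниже приведён примерный запрос-набросок (строка `'это не дата'` взята исключительно для иллюстрации): при нераспознанном формате функция возвращает нулевую дату и время вместо исключения.

``` sql
-- Нераспознаваемая строка не приводит к ошибке:
-- возвращается нулевое значение типа DateTime (начало эпохи Unix).
SELECT
    parseDateTimeBestEffortOrZero('2021-01-01 01:01:00') AS parsed,
    parseDateTimeBestEffortOrZero('это не дата') AS zero_value;
```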
## parseDateTimeBestEffortUSOrNull {#parsedatetimebesteffortusornull}

@@ -1036,19 +1070,23 @@ SELECT parseDateTimeBestEffortUSOrZero('02.2021') AS parseDateTimeBestEffortUSOr

## parseDateTime64BestEffort {#parsedatetime64besteffort}

-Работает также как функция [parseDateTimeBestEffort](#parsedatetimebesteffort) но также понимамет милисекунды и микросекунды и возвращает `DateTime64(3)` или `DateTime64(6)` типы данных в зависимости от заданной точности.
+Работает аналогично функции [parseDateTimeBestEffort](#parsedatetimebesteffort), но также принимает миллисекунды и микросекунды. Возвращает тип данных [DateTime64](../../sql-reference/data-types/datetime64.md) с заданной точностью.

-**Syntax**
+**Синтаксис**

``` sql
parseDateTime64BestEffort(time_string [, precision [, time_zone]])
```

-**Parameters**
+**Аргументы**

-- `time_string` — String containing a date or date with time to convert. [String](../../sql-reference/data-types/string.md).
-- `precision` — `3` for milliseconds, `6` for microseconds. Default `3`. Optional [UInt8](../../sql-reference/data-types/int-uint.md).
-- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). The function parses `time_string` according to the timezone. Optional. [String](../../sql-reference/data-types/string.md).
+- `time_string` — строка, содержащая дату или дату со временем, которые нужно преобразовать. [String](../../sql-reference/data-types/string.md).
+- `precision` — требуемая точность: `3` — для миллисекунд, `6` — для микросекунд. По умолчанию — `3`. Необязательный. [UInt8](../../sql-reference/data-types/int-uint.md).
+- `time_zone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone). Разбирает значение `time_string` в зависимости от часового пояса. Необязательный. [String](../../sql-reference/data-types/string.md).
+
+**Возвращаемое значение**
+
+- `time_string`, преобразованная в тип данных [DateTime64](../../sql-reference/data-types/datetime64.md).

**Примеры**

@@ -1062,7 +1100,7 @@ UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',6) AS a, toTypeName(a) AS t
UNION ALL
SELECT parseDateTime64BestEffort('2021-01-01 01:01:00.12346',3,'Europe/Moscow') AS a, toTypeName(a) AS t
-FORMAT PrettyCompactMonoBlcok
+FORMAT PrettyCompactMonoBlock;
```

Результат:

@@ -1078,12 +1116,11 @@

## parseDateTime64BestEffortOrNull {#parsedatetime32besteffortornull}

-Работает также как функция [parseDateTime64BestEffort](#parsedatetime64besteffort) но возвращает `NULL` когда встречает формат даты который не может обработать.
+Работает аналогично функции [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает `NULL`, если формат даты не может быть обработан.

## parseDateTime64BestEffortOrZero {#parsedatetime64besteffortorzero}

-Работает также как функция [parseDateTime64BestEffort](#parsedatetimebesteffort) но возвращает "нулевую" дату и время когда встречает формат даты который не может обработать.
-
+Работает аналогично функции [parseDateTime64BestEffort](#parsedatetime64besteffort), но возвращает нулевую дату и время, если формат даты не может быть обработан.

## toLowCardinality {#tolowcardinality}

@@ -1130,11 +1167,14 @@ SELECT toLowCardinality('1');
## toUnixTimestamp64Nano {#tounixtimestamp64nano}

Преобразует значение `DateTime64` в значение `Int64` с фиксированной точностью менее одной секунды.
-Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности. Обратите внимание, что возвращаемое значение - это временная метка в UTC, а не в часовом поясе `DateTime64`. +Входное значение округляется соответствующим образом вверх или вниз в зависимости от его точности. + +!!! info "Примечание" + Возвращаемое значение — это временная метка в UTC, а не в часовом поясе `DateTime64`. **Синтаксис** -``` sql +```sql toUnixTimestamp64Milli(value) ``` @@ -1150,7 +1190,7 @@ toUnixTimestamp64Milli(value) Запрос: -``` sql +```sql WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64 SELECT toUnixTimestamp64Milli(dt64); ``` @@ -1296,4 +1336,3 @@ FROM numbers(3); │ 2,"good" │ └───────────────────────────────────────────┘ ``` - diff --git a/docs/ru/sql-reference/statements/create/table.md b/docs/ru/sql-reference/statements/create/table.md index 1ccd0a600f3..1d65d82b24c 100644 --- a/docs/ru/sql-reference/statements/create/table.md +++ b/docs/ru/sql-reference/statements/create/table.md @@ -346,4 +346,39 @@ SELECT * FROM base.t1; └───┘ ``` +## Секция COMMENT {#comment-table} + +Вы можете добавить комментарий к таблице при ее создании. + +!!!note "Замечание" + Комментарий поддерживается для всех движков таблиц, кроме [Kafka](../../../engines/table-engines/integrations/kafka.md), [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) и [EmbeddedRocksDB](../../../engines/table-engines/integrations/embedded-rocksdb.md). + +**Синтаксис** + +``` sql +CREATE TABLE db.table_name +( + name1 type1, name2 type2, ... +) +ENGINE = engine +COMMENT 'Comment' +``` + +**Пример** + +Запрос: + +``` sql +CREATE TABLE t1 (x String) ENGINE = Memory COMMENT 'The temporary table'; +SELECT name, comment FROM system.tables WHERE name = 't1'; +``` + +Результат: + +```text +┌─name─┬─comment─────────────┐ +│ t1 │ The temporary table │ +└──────┴─────────────────────┘ +``` + diff --git a/docs/ru/sql-reference/statements/explain.md b/docs/ru/sql-reference/statements/explain.md index c2a35f1b925..c925e7030a7 100644 --- a/docs/ru/sql-reference/statements/explain.md +++ b/docs/ru/sql-reference/statements/explain.md @@ -111,9 +111,11 @@ CROSS JOIN system.numbers AS c Настройки: -- `header` — выводит выходной заголовок для шага. По умолчанию: 0. -- `description` — выводит описание шага. По умолчанию: 1. -- `actions` — выводит подробную информацию о действиях, выполняемых на данном шаге. По умолчанию: 0. +- `header` — выводит выходной заголовок для шага. По умолчанию: 0. +- `description` — выводит описание шага. По умолчанию: 1. +- `indexes` — показывает используемые индексы, количество отфильтрованных кусков и гранул для каждого примененного индекса. По умолчанию: 0. Поддерживается для таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). +- `actions` — выводит подробную информацию о действиях, выполняемых на данном шаге. По умолчанию: 0. +- `json` — выводит шаги выполнения запроса в виде строки в формате [JSON](../../interfaces/formats.md#json). По умолчанию: 0. Чтобы избежать ненужного экранирования, рекомендуется использовать формат [TSVRaw](../../interfaces/formats.md#tabseparatedraw). Пример: @@ -133,6 +135,225 @@ Union !!! note "Примечание" Оценка стоимости выполнения шага и запроса не поддерживается. + +При `json = 1` шаги выполнения запроса выводятся в формате JSON. Каждый узел — это словарь, в котором всегда есть ключи `Node Type` и `Plans`. `Node Type` — это строка с именем шага. `Plans` — это массив с описаниями дочерних шагов. 
Другие дополнительные ключи могут быть добавлены в зависимости от типа узла и настроек. + +Пример: + +```sql +EXPLAIN json = 1, description = 0 SELECT 1 UNION ALL SELECT 2 FORMAT TSVRaw; +``` + +```json +[ + { + "Plan": { + "Node Type": "Union", + "Plans": [ + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + }, + { + "Node Type": "Expression", + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + } + ] + } + } +] +``` + +При `description` = 1 к шагу добавляется ключ `Description`: + +```json +{ + "Node Type": "ReadFromStorage", + "Description": "SystemOne" +} +``` + +При `header` = 1 к шагу добавляется ключ `Header` в виде массива столбцов. + +Пример: + +```sql +EXPLAIN json = 1, description = 0, header = 1 SELECT 1, 2 + dummy; +``` + +```json +[ + { + "Plan": { + "Node Type": "Expression", + "Header": [ + { + "Name": "1", + "Type": "UInt8" + }, + { + "Name": "plus(2, dummy)", + "Type": "UInt16" + } + ], + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Header": [ + { + "Name": "dummy", + "Type": "UInt8" + } + ], + "Plans": [ + { + "Node Type": "ReadFromStorage", + "Header": [ + { + "Name": "dummy", + "Type": "UInt8" + } + ] + } + ] + } + ] + } + } +] +``` + +При `indexes` = 1 добавляется ключ `Indexes`. Он содержит массив используемых индексов. Каждый индекс описывается как строка в формате JSON с ключом `Type` (`MinMax`, `Partition`, `PrimaryKey` или `Skip`) и дополнительные ключи: + +- `Name` — имя индекса (на данный момент используется только для индекса `Skip`). +- `Keys` — массив столбцов, используемых индексом. +- `Condition` — строка с используемым условием. +- `Description` — индекс (на данный момент используется только для индекса `Skip`). +- `Initial Parts` — количество кусков до применения индекса. +- `Selected Parts` — количество кусков после применения индекса. +- `Initial Granules` — количество гранул до применения индекса. +- `Selected Granulesis` — количество гранул после применения индекса. + +Пример: + +```json +"Node Type": "ReadFromMergeTree", +"Indexes": [ + { + "Type": "MinMax", + "Keys": ["y"], + "Condition": "(y in [1, +inf))", + "Initial Parts": 5, + "Selected Parts": 4, + "Initial Granules": 12, + "Selected Granules": 11 + }, + { + "Type": "Partition", + "Keys": ["y", "bitAnd(z, 3)"], + "Condition": "and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1])))", + "Initial Parts": 4, + "Selected Parts": 3, + "Initial Granules": 11, + "Selected Granules": 10 + }, + { + "Type": "PrimaryKey", + "Keys": ["x", "y"], + "Condition": "and((x in [11, +inf)), (y in [1, +inf)))", + "Initial Parts": 3, + "Selected Parts": 2, + "Initial Granules": 10, + "Selected Granules": 6 + }, + { + "Type": "Skip", + "Name": "t_minmax", + "Description": "minmax GRANULARITY 2", + "Initial Parts": 2, + "Selected Parts": 1, + "Initial Granules": 6, + "Selected Granules": 2 + }, + { + "Type": "Skip", + "Name": "t_set", + "Description": "set GRANULARITY 2", + "Initial Parts": 1, + "Selected Parts": 1, + "Initial Granules": 2, + "Selected Granules": 1 + } +] +``` + +При `actions` = 1 добавляются ключи, зависящие от типа шага. 
+ +Пример: + +```sql +EXPLAIN json = 1, actions = 1, description = 0 SELECT 1 FORMAT TSVRaw; +``` + +```json +[ + { + "Plan": { + "Node Type": "Expression", + "Expression": { + "Inputs": [], + "Actions": [ + { + "Node Type": "Column", + "Result Type": "UInt8", + "Result Type": "Column", + "Column": "Const(UInt8)", + "Arguments": [], + "Removed Arguments": [], + "Result": 0 + } + ], + "Outputs": [ + { + "Name": "1", + "Type": "UInt8" + } + ], + "Positions": [0], + "Project Input": true + }, + "Plans": [ + { + "Node Type": "SettingQuotaAndLimits", + "Plans": [ + { + "Node Type": "ReadFromStorage" + } + ] + } + ] + } + } +] +``` ### EXPLAIN PIPELINE {#explain-pipeline} @@ -141,7 +362,6 @@ Union - `header` — выводит заголовок для каждого выходного порта. По умолчанию: 0. - `graph` — выводит граф, описанный на языке [DOT](https://ru.wikipedia.org/wiki/DOT_(язык)). По умолчанию: 0. - `compact` — выводит граф в компактном режиме, если включена настройка `graph`. По умолчанию: 1. -- `indexes` — показывает используемые индексы, количество отфильтрованных кусков и гранул для каждого примененного индекса. По умолчанию: 0. Поддерживается для таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). Пример: diff --git a/docs/ru/sql-reference/statements/insert-into.md b/docs/ru/sql-reference/statements/insert-into.md index bbd330962cf..328f1023624 100644 --- a/docs/ru/sql-reference/statements/insert-into.md +++ b/docs/ru/sql-reference/statements/insert-into.md @@ -107,6 +107,8 @@ INSERT INTO [db.]table [(c1, c2, c3)] SELECT ... Для табличной функции [input()](../table-functions/input.md) после секции `SELECT` должна следовать секция `FORMAT`. +Чтобы вставить значение по умолчанию вместо `NULL` в столбец, который не позволяет хранить `NULL`, включите настройку [insert_null_as_default](../../operations/settings/settings.md#insert_null_as_default). + ### Замечания о производительности {#zamechaniia-o-proizvoditelnosti} `INSERT` сортирует входящие данные по первичному ключу и разбивает их на партиции по ключу партиционирования. Если вы вставляете данные в несколько партиций одновременно, то это может значительно снизить производительность запроса `INSERT`. Чтобы избежать этого: diff --git a/docs/ru/sql-reference/statements/select/prewhere.md b/docs/ru/sql-reference/statements/select/prewhere.md index c2a02b1a436..5ba25e6fa6e 100644 --- a/docs/ru/sql-reference/statements/select/prewhere.md +++ b/docs/ru/sql-reference/statements/select/prewhere.md @@ -16,6 +16,9 @@ Prewhere — это оптимизация для более эффективн Если значение параметра `optimize_move_to_prewhere` равно 0, эвристика по автоматическому перемещнию части выражений из `WHERE` к `PREWHERE` отключается. +!!! note "Внимание" + Секция `PREWHERE` выполняется до `FINAL`, поэтому результаты запросов `FROM FINAL` могут исказится при использовании `PREWHERE` с полями не входящями в `ORDER BY` таблицы. + ## Ограничения {#limitations} `PREWHERE` поддерживается только табличными движками из семейства `*MergeTree`. diff --git a/docs/ru/sql-reference/statements/select/union.md b/docs/ru/sql-reference/statements/select/union.md index de8a9b0e4ea..a1e31a0be7f 100644 --- a/docs/ru/sql-reference/statements/select/union.md +++ b/docs/ru/sql-reference/statements/select/union.md @@ -78,3 +78,7 @@ SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 2; Запросы, которые являются частью `UNION/UNION ALL/UNION DISTINCT`, выполняются параллельно, и их результаты могут быть смешаны вместе. 
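Например, следующий набросок запроса (используется только табличная функция `numbers`, без реальных таблиц) показывает, что без внешнего `ORDER BY` порядок строк из разных ветвей не гарантирован:

``` sql
-- Ветви UNION ALL выполняются параллельно, поэтому порядок строк в результате
-- может меняться от запуска к запуску.
-- Чтобы получить детерминированный порядок, добавьте ORDER BY ко всему запросу.
SELECT number AS n FROM numbers(3)
UNION ALL
SELECT number + 100 AS n FROM numbers(3);
```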
+**Смотрите также** + +- Настройка [insert_null_as_default](../../../operations/settings/settings.md#insert_null_as_default). +- Настройка [union_default_mode](../../../operations/settings/settings.md#union-default-mode). diff --git a/docs/ru/sql-reference/statements/system.md b/docs/ru/sql-reference/statements/system.md index f0f9b77b5ba..8c82cacdc43 100644 --- a/docs/ru/sql-reference/statements/system.md +++ b/docs/ru/sql-reference/statements/system.md @@ -196,7 +196,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] Возвращает `Ok.` даже если указана несуществующая таблица или таблица имеет тип отличный от MergeTree. Возвращает ошибку если указана не существующая база данных: ``` sql -SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] +SYSTEM START MOVES [[db.]merge_tree_family_table_name] ``` ## Managing ReplicatedMergeTree Tables {#query-language-system-replicated} diff --git a/docs/tools/README.md b/docs/tools/README.md index 0a6c41d8089..4340561fa57 100644 --- a/docs/tools/README.md +++ b/docs/tools/README.md @@ -51,5 +51,5 @@ The easiest way to see the result is to use `--livereload=8888` argument of buil At the moment there’s no easy way to do just that, but you can consider: -- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA). +- To hit the “Watch” button on top of GitHub web interface to know as early as possible, even during pull request. Alternative to this is `#github-activity` channel of [public ClickHouse Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-qfort0u8-TWqK4wIP0YSdoDE0btKa1w). - Some search engines allow to subscribe on specific website changes via email and you can opt-in for that for https://clickhouse.tech. 
diff --git a/docs/tools/amp.py b/docs/tools/amp.py index b08b58d3cba..22417407946 100644 --- a/docs/tools/amp.py +++ b/docs/tools/amp.py @@ -62,7 +62,6 @@ def build_amp(lang, args, cfg): for root, _, filenames in os.walk(site_temp): if 'index.html' in filenames: paths.append(prepare_amp_html(lang, args, root, site_temp, main_site_dir)) - test.test_amp(paths, lang) logging.info(f'Finished building AMP version for {lang}') diff --git a/docs/tools/blog.py b/docs/tools/blog.py index c3261f61d4d..d0f2496f914 100644 --- a/docs/tools/blog.py +++ b/docs/tools/blog.py @@ -40,7 +40,7 @@ def build_for_lang(lang, args): site_names = { 'en': 'ClickHouse Blog', - 'ru': 'Блог ClickHouse ' + 'ru': 'Блог ClickHouse' } assert len(site_names) == len(languages) @@ -62,7 +62,7 @@ def build_for_lang(lang, args): strict=True, theme=theme_cfg, nav=blog_nav, - copyright='©2016–2020 Yandex LLC', + copyright='©2016–2021 Yandex LLC', use_directory_urls=True, repo_name='ClickHouse/ClickHouse', repo_url='https://github.com/ClickHouse/ClickHouse/', diff --git a/docs/tools/build.py b/docs/tools/build.py index 5a1f10268ab..61112d5a4f5 100755 --- a/docs/tools/build.py +++ b/docs/tools/build.py @@ -94,7 +94,7 @@ def build_for_lang(lang, args): site_dir=site_dir, strict=True, theme=theme_cfg, - copyright='©2016–2020 Yandex LLC', + copyright='©2016–2021 Yandex LLC', use_directory_urls=True, repo_name='ClickHouse/ClickHouse', repo_url='https://github.com/ClickHouse/ClickHouse/', diff --git a/docs/tools/nav.py b/docs/tools/nav.py index 291797a1633..db64d1ba404 100644 --- a/docs/tools/nav.py +++ b/docs/tools/nav.py @@ -31,7 +31,16 @@ def build_nav_entry(root, args): result_items.append((prio, title, payload)) elif filename.endswith('.md'): path = os.path.join(root, filename) - meta, content = util.read_md_file(path) + + meta = '' + content = '' + + try: + meta, content = util.read_md_file(path) + except: + print('Error in file: {}'.format(path)) + raise + path = path.split('/', 2)[-1] title = meta.get('toc_title', find_first_header(content)) if title: diff --git a/docs/tools/test.py b/docs/tools/test.py index 00d1d47137f..ada4df29644 100755 --- a/docs/tools/test.py +++ b/docs/tools/test.py @@ -3,34 +3,9 @@ import logging import os import sys - import bs4 - -import logging -import os import subprocess -import bs4 - - -def test_amp(paths, lang): - try: - # Get latest amp validator version - subprocess.check_call('amphtml-validator --help', - stdout=subprocess.DEVNULL, - stderr=subprocess.DEVNULL, - shell=True) - except subprocess.CalledProcessError: - subprocess.check_call('npm i -g amphtml-validator', stderr=subprocess.DEVNULL, shell=True) - - paths = ' '.join(paths) - command = f'amphtml-validator {paths}' - try: - subprocess.check_output(command, shell=True).decode('utf-8') - except subprocess.CalledProcessError: - logging.error(f'Invalid AMP for {lang}') - raise - def test_template(template_path): if template_path.endswith('amp.html'): diff --git a/docs/tools/website.py b/docs/tools/website.py index 6927fbd87bb..f0346de5c94 100644 --- a/docs/tools/website.py +++ b/docs/tools/website.py @@ -155,10 +155,6 @@ def build_website(args): os.path.join(args.src_dir, 'utils', 'list-versions', 'version_date.tsv'), os.path.join(args.output_dir, 'data', 'version_date.tsv')) - shutil.copy2( - os.path.join(args.website_dir, 'js', 'embedd.min.js'), - os.path.join(args.output_dir, 'js', 'embedd.min.js')) - for root, _, filenames in os.walk(args.output_dir): for filename in filenames: if filename == 'main.html': diff --git 
a/docs/zh/engines/table-engines/integrations/odbc.md b/docs/zh/engines/table-engines/integrations/odbc.md index 1264efeaa41..767c32cc438 100644 --- a/docs/zh/engines/table-engines/integrations/odbc.md +++ b/docs/zh/engines/table-engines/integrations/odbc.md @@ -7,11 +7,11 @@ toc_title: ODBC # ODBC {#table-engine-odbc} -允许ClickHouse通过以下方式连接到外部数据库 [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity). +允许ClickHouse通过[ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity)方式连接到外部数据库. -为了安全地实现ODBC连接,ClickHouse使用单独的程序 `clickhouse-odbc-bridge`. 如果直接从ODBC驱动程序加载 `clickhouse-server`,驱动程序问题可能会导致ClickHouse服务器崩溃。 ClickHouse自动启动 `clickhouse-odbc-bridge` 当它是必需的。 ODBC桥程序是从相同的软件包作为安装 `clickhouse-server`. +为了安全地实现ODBC连接,ClickHouse使用了一个独立程序 `clickhouse-odbc-bridge`. 如果ODBC驱动程序是直接从 `clickhouse-server`中加载的,那么驱动问题可能会导致ClickHouse服务崩溃。 当有需要时,ClickHouse会自动启动 `clickhouse-odbc-bridge`。 ODBC桥梁程序与`clickhouse-server`来自相同的安装包. -该引擎支持 [可为空](../../../sql-reference/data-types/nullable.md) 数据类型。 +该引擎支持 [可为空](../../../sql-reference/data-types/nullable.md) 的数据类型。 ## 创建表 {#creating-a-table} @@ -25,14 +25,14 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] ENGINE = ODBC(connection_settings, external_database, external_table) ``` -请参阅的详细说明 [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) 查询。 +详情请见 [CREATE TABLE](../../../sql-reference/statements/create.md#create-table-query) 查询。 表结构可以与源表结构不同: - 列名应与源表中的列名相同,但您可以按任何顺序使用其中的一些列。 -- 列类型可能与源表中的列类型不同。 ClickHouse尝试 [投](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) ClickHouse数据类型的值。 +- 列类型可能与源表中的列类型不同。 ClickHouse尝试将数值[映射](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) 到ClickHouse的数据类型。 -**发动机参数** +**引擎参数** - `connection_settings` — Name of the section with connection settings in the `odbc.ini` 文件 - `external_database` — Name of a database in an external DBMS. @@ -40,13 +40,13 @@ ENGINE = ODBC(connection_settings, external_database, external_table) ## 用法示例 {#usage-example} -**通过ODBC从本地MySQL安装中检索数据** +**通过ODBC从本地安装的MySQL中检索数据** -此示例检查Ubuntu Linux18.04和MySQL服务器5.7。 +本示例针对Ubuntu Linux18.04和MySQL服务器5.7进行检查。 -确保安装了unixODBC和MySQL连接器。 +请确保安装了unixODBC和MySQL连接器。 -默认情况下(如果从软件包安装),ClickHouse以用户身份启动 `clickhouse`. 因此,您需要在MySQL服务器中创建和配置此用户。 +默认情况下(如果从软件包安装),ClickHouse以用户`clickhouse`的身份启动 . 因此,您需要在MySQL服务器中创建和配置此用户。 ``` bash $ sudo mysql @@ -57,7 +57,7 @@ mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse'; mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION; ``` -然后配置连接 `/etc/odbc.ini`. +然后在`/etc/odbc.ini`中配置连接 . ``` bash $ cat /etc/odbc.ini @@ -70,7 +70,7 @@ USERNAME = clickhouse PASSWORD = clickhouse ``` -您可以使用 `isql` unixodbc安装中的实用程序。 +您可以从安装的unixodbc中使用 `isql` 实用程序来检查连接情况。 ``` bash $ isql -v mysqlconn diff --git a/docs/zh/operations/backup.md b/docs/zh/operations/backup.md index 1b1993e3ae6..6d517e6ccb3 100644 --- a/docs/zh/operations/backup.md +++ b/docs/zh/operations/backup.md @@ -7,37 +7,37 @@ toc_title: "\u6570\u636E\u5907\u4EFD" # 数据备份 {#data-backup} -尽管[副本](../engines/table-engines/mergetree-family/replication.md) 可以预防硬件错误带来的数据丢失, 但是它不能防止人为操作的错误: 意外删除数据, 删除错误的 table 或者删除错误 cluster 上的 table, 可以导致错误数据处理错误或者数据损坏的 bugs. 这类意外可能会影响所有的副本. ClickHouse 有内建的保障措施可以预防一些错误 — 例如, 默认情况下[您不能使用类似MergeTree的引擎删除包含超过50Gb数据的表](server-configuration-parameters/settings.md#max-table-size-to-drop). 
但是,这些保障措施不能涵盖所有可能的情况,并且可以规避。 +尽管 [副本] (../engines/table-engines/mergetree-family/replication.md) 可以提供针对硬件的错误防护, 但是它不能预防人为操作失误: 数据的意外删除, 错误表的删除或者错误集群上表的删除, 以及导致错误数据处理或者数据损坏的软件bug. 在很多案例中,这类意外可能会影响所有的副本. ClickHouse 有内置的保护措施可以预防一些错误 — 例如, 默认情况下 [不能人工删除使用带有MergeTree引擎且包含超过50Gb数据的表] (server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保护措施不能覆盖所有可能情况,并且这些措施可以被绕过。 -为了有效地减少可能的人为错误,您应该 **提前**准备备份和还原数据的策略. +为了有效地减少可能的人为错误,您应该 **提前** 仔细的准备备份和数据还原的策略. -不同公司有不同的可用资源和业务需求,因此没有适合各种情况的ClickHouse备份和恢复通用解决方案。 适用于 1GB 的数据的方案可能并不适用于几十 PB 数据的情况。 有多种可能的并有自己优缺点的方法,这将在下面讨论。 好的主意是同时结合使用多种方法而不是仅使用一种,这样可以弥补不同方法各自的缺点。 +不同公司有不同的可用资源和业务需求,因此不存在一个通用的解决方案可以应对各种情况下的ClickHouse备份和恢复。 适用于 1GB 数据的方案可能并不适用于几十 PB 数据的情况。 有多种具备各自优缺点的可能方法,将在下面对其进行讨论。最好使用几种方法而不是仅仅使用一种方法来弥补它们的各种缺点。。 !!! note "注" - 请记住,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时(或者至少需要比业务能够容忍的时间更长),恢复可能无法正常工作。 因此,无论您选择哪种备份方法,请确保自动还原过程,并定期在备用ClickHouse群集上练习。 + 需要注意的是,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时可能无法正常恢复(或者至少需要的时间比业务能够容忍的时间更长)。 因此,无论您选择哪种备份方法,请确保自动还原过程,并定期在备用ClickHouse群集上演练。 -## 将源数据复制到其他地方 {#duplicating-source-data-somewhere-else} +## 将源数据复制到其它地方 {#duplicating-source-data-somewhere-else} -通常被聚集到ClickHouse的数据是通过某种持久队列传递的,例如 [Apache Kafka](https://kafka.apache.org). 在这种情况下,可以配置一组额外的订阅服务器,这些订阅服务器将在写入ClickHouse时读取相同的数据流,并将其存储在冷存储中。 大多数公司已经有一些默认的推荐冷存储,可能是对象存储或分布式文件系统,如 [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). +通常摄入到ClickHouse的数据是通过某种持久队列传递的,例如 [Apache Kafka] (https://kafka.apache.org). 在这种情况下,可以配置一组额外的订阅服务器,这些订阅服务器将在写入ClickHouse时读取相同的数据流,并将其存储在冷存储中。 大多数公司已经有一些默认推荐的冷存储,可能是对象存储或分布式文件系统,如 [HDFS] (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). ## 文件系统快照 {#filesystem-snapshots} -某些本地文件系统提供快照功能(例如, [ZFS](https://en.wikipedia.org/wiki/ZFS)),但它们可能不是提供实时查询的最佳选择。 一个可能的解决方案是使用这种文件系统创建额外的副本,并将它们从 [分布](../engines/table-engines/special/distributed.md) 用于以下目的的表 `SELECT` 查询。 任何修改数据的查询都无法访问此类副本上的快照。 作为奖励,这些副本可能具有特殊的硬件配置,每个服务器附加更多的磁盘,这将是经济高效的。 +某些本地文件系统提供快照功能(例如, [ZFS] (https://en.wikipedia.org/wiki/ZFS)),但它们可能不是提供实时查询的最佳选择。 一个可能的解决方案是使用这种文件系统创建额外的副本,并将它们与用于`SELECT` 查询的 [分布式] (../engines/table-engines/special/distributed.md) 表分离。 任何修改数据的查询都无法访问此类副本上的快照。 作为回报,这些副本可能具有特殊的硬件配置,每个服务器附加更多的磁盘,这将是经济高效的。 ## clickhouse-copier {#clickhouse-copier} -[clickhouse-copier](utilities/clickhouse-copier.md) 是一个多功能工具,最初创建用于重新分片pb大小的表。 因为它可以在ClickHouse表和集群之间可靠地复制数据,所以它还可用于备份和还原数据。 +[clickhouse-copier] (utilities/clickhouse-copier.md) 是一个多功能工具,最初创建它是为了用于重新切分pb大小的表。 因为它能够在ClickHouse表和集群之间可靠地复制数据,所以它也可用于备份和还原数据。 对于较小的数据量,一个简单的 `INSERT INTO ... SELECT ...` 到远程表也可以工作。 -## 部件操作 {#manipulations-with-parts} +## part操作 {#manipulations-with-parts} -ClickHouse允许使用 `ALTER TABLE ... FREEZE PARTITION ...` 查询以创建表分区的本地副本。 这是利用硬链接(hardlink)到 `/var/lib/clickhouse/shadow/` 文件夹中实现的,所以它通常不会占用旧数据的额外磁盘空间。 创建的文件副本不由ClickHouse服务器处理,所以你可以把它们留在那里:你将有一个简单的备份,不需要任何额外的外部系统,但它仍然会容易出现硬件问题。 出于这个原因,最好将它们远程复制到另一个位置,然后删除本地副本。 分布式文件系统和对象存储仍然是一个不错的选择,但是具有足够大容量的正常附加文件服务器也可以工作(在这种情况下,传输将通过网络文件系统 [rsync](https://en.wikipedia.org/wiki/Rsync)). +ClickHouse允许使用 `ALTER TABLE ... FREEZE PARTITION ...` 查询以创建表分区的本地副本。 这是利用硬链接(hardlink)到 `/var/lib/clickhouse/shadow/` 文件夹中实现的,所以它通常不会因为旧数据而占用额外的磁盘空间。 创建的文件副本不由ClickHouse服务器处理,所以你可以把它们留在那里:你将有一个简单的备份,不需要任何额外的外部系统,但它仍然容易出现硬件问题。 出于这个原因,最好将它们远程复制到另一个位置,然后删除本地副本。 分布式文件系统和对象存储仍然是一个不错的选择,但是具有足够大容量的正常附加文件服务器也可以工作(在这种情况下,传输将通过网络文件系统或者也许是 [rsync] (https://en.wikipedia.org/wiki/Rsync) 来进行). 数据可以使用 `ALTER TABLE ... 
ATTACH PARTITION ...` 从备份中恢复。 -有关与分区操作相关的查询的详细信息,请参阅 [更改文档](../sql-reference/statements/alter.md#alter_manipulations-with-partitions). +有关与分区操作相关的查询的详细信息,请参阅 [更改文档] (../sql-reference/statements/alter.md#alter_manipulations-with-partitions). -第三方工具可用于自动化此方法: [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup). +第三方工具可用于自动化此方法: [clickhouse-backup] (https://github.com/AlexAkulov/clickhouse-backup). -[原始文章](https://clickhouse.tech/docs/en/operations/backup/) +[原始文章] (https://clickhouse.tech/docs/en/operations/backup/) diff --git a/docs/zh/operations/system-tables/data_type_families.md b/docs/zh/operations/system-tables/data_type_families.md index 21eb4785e23..db08ff0371b 100644 --- a/docs/zh/operations/system-tables/data_type_families.md +++ b/docs/zh/operations/system-tables/data_type_families.md @@ -5,13 +5,13 @@ machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3 # 系统。data_type_families {#system_tables-data_type_families} -包含有关受支持的信息 [数据类型](../../sql-reference/data-types/). +包含有关受支持的[数据类型](../../sql-reference/data-types/)的信息. -列: +列字段包括: -- `name` ([字符串](../../sql-reference/data-types/string.md)) — Data type name. -- `case_insensitive` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Property that shows whether you can use a data type name in a query in case insensitive manner or not. For example, `Date` 和 `date` 都是有效的。 -- `alias_to` ([字符串](../../sql-reference/data-types/string.md)) — Data type name for which `name` 是个化名 +- `name` ([String](../../sql-reference/data-types/string.md)) — 数据类型的名称. +- `case_insensitive` ([UInt8](../../sql-reference/data-types/int-uint.md)) — 该属性显示是否可以在查询中以不区分大小写的方式使用数据类型名称。例如 `Date` 和 `date` 都是有效的。 +- `alias_to` ([String](../../sql-reference/data-types/string.md)) — 名称为别名的数据类型名称。 **示例** @@ -36,4 +36,4 @@ SELECT * FROM system.data_type_families WHERE alias_to = 'String' **另请参阅** -- [语法](../../sql-reference/syntax.md) — Information about supported syntax. +- [Syntax](../../sql-reference/syntax.md) — 关于所支持的语法信息. 
diff --git a/docs/zh/operations/system-tables/index.md b/docs/zh/operations/system-tables/index.md index 56067bc5057..0e5778e3051 100644 --- a/docs/zh/operations/system-tables/index.md +++ b/docs/zh/operations/system-tables/index.md @@ -7,33 +7,33 @@ toc_title: "\u7CFB\u7EDF\u8868" # 系统表 {#system-tables} -## 导言 {#system-tables-introduction} +## 引言 {#system-tables-introduction} -系统表提供以下信息: +系统表提供的信息如下: -- 服务器状态、进程和环境。 +- 服务器的状态、进程以及环境。 - 服务器的内部进程。 系统表: -- 坐落于 `system` 数据库。 -- 仅适用于读取数据。 -- 不能删除或更改,但可以分离。 +- 存储于 `system` 数据库。 +- 仅提供数据读取功能。 +- 不能被删除或更改,但可以对其进行分离(detach)操作。 -大多数系统表将数据存储在RAM中。 ClickHouse服务器在开始时创建此类系统表。 +大多数系统表将其数据存储在RAM中。 一个ClickHouse服务在刚启动时便会创建此类系统表。 -与其他系统表不同,系统日志表 [metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log), [query_log](../../operations/system-tables/query_log.md#system_tables-query_log), [query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log), [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log), [part_log](../../operations/system-tables/part_log.md#system.part_log), crash_log and text_log 默认采用[MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) 引擎并将其数据存储在存储文件系统中。 如果从文件系统中删除表,ClickHouse服务器会在下一次写入数据时再次创建空表。 如果系统表架构在新版本中发生更改,则ClickHouse会重命名当前表并创建一个新表。 +不同于其他系统表,系统日志表 [metric_log](../../operations/system-tables/metric_log.md#system_tables-metric_log), [query_log](../../operations/system-tables/query_log.md#system_tables-query_log), [query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log), [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log), [part_log](../../operations/system-tables/part_log.md#system.part_log), crash_log and text_log 默认采用[MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) 引擎并将其数据存储在文件系统中。 如果人为的从文件系统中删除表,ClickHouse服务器会在下一次进行数据写入时再次创建空表。 如果系统表结构在新版本中发生更改,那么ClickHouse会重命名当前表并创建一个新表。 -用户可以通过在`/etc/clickhouse-server/config.d/`下创建与系统表同名的配置文件, 或者在`/etc/clickhouse-server/config.xml`中设置相应配置项,来自定义系统日志表的结构。可以自定义的配置项如下: +用户可以通过在`/etc/clickhouse-server/config.d/`下创建与系统表同名的配置文件, 或者在`/etc/clickhouse-server/config.xml`中设置相应配置项,来自定义系统日志表的结构。可供自定义的配置项如下: -- `database`: 系统日志表所在的数据库。这个选项目前已经废弃。所有的系统日表都位于`system`库中。 -- `table`: 系统日志表名。 +- `database`: 系统日志表所在的数据库。这个选项目前已经不推荐使用。所有的系统日表都位于`system`库中。 +- `table`: 接收数据写入的系统日志表。 - `partition_by`: 指定[PARTITION BY](../../engines/table-engines/mergetree-family/custom-partitioning-key.md)表达式。 - `ttl`: 指定系统日志表TTL选项。 -- `flush_interval_milliseconds`: 指定系统日志表数据落盘时间。 -- `engine`: 指定完整的表引擎定义。(以`ENGINE = `开始)。 这个选项与`partition_by`以及`ttl`冲突。如果两者一起设置,服务启动时会抛出异常并且退出。 +- `flush_interval_milliseconds`: 指定日志表数据刷新到磁盘的时间间隔。 +- `engine`: 指定完整的表引擎定义。(以`ENGINE = `开头)。 这个选项与`partition_by`以及`ttl`冲突。如果与两者一起设置,服务启动时会抛出异常并且退出。 -一个配置定义的例子如下: +配置定义的示例如下: ``` @@ -50,20 +50,20 @@ toc_title: "\u7CFB\u7EDF\u8868" ``` -默认情况下,表增长是无限的。 要控制表的大小,可以使用 TTL 删除过期日志记录的设置。 你也可以使用分区功能 `MergeTree`-发动机表。 +默认情况下,表增长是无限的。可以通过TTL 删除过期日志记录的设置来控制表的大小。 你也可以使用分区功能 `MergeTree`-引擎表。 ## 系统指标的来源 {#system-tables-sources-of-system-metrics} 用于收集ClickHouse服务器使用的系统指标: - `CAP_NET_ADMIN` 能力。 -- [procfs](https://en.wikipedia.org/wiki/Procfs) (仅在Linux中)。 +- [procfs](https://en.wikipedia.org/wiki/Procfs) (仅限于Linux)。 **procfs** -如果ClickHouse服务器没有 `CAP_NET_ADMIN` 能力,它试图回落到 `ProcfsMetricsProvider`. `ProcfsMetricsProvider` 允许收集每个查询系统指标(用于CPU和I/O)。 +如果ClickHouse服务器没有 `CAP_NET_ADMIN` 能力,那么它将试图退回到 `ProcfsMetricsProvider`. 
`ProcfsMetricsProvider` 允许收集每个查询系统指标(包括CPU和I/O)。 -如果系统上支持并启用procfs,ClickHouse server将收集这些指标: +如果系统上支持并启用procfs,ClickHouse server将收集如下指标: - `OSCPUVirtualTimeMicroseconds` - `OSCPUWaitMicroseconds` diff --git a/docs/zh/sql-reference/data-types/special-data-types/interval.md b/docs/zh/sql-reference/data-types/special-data-types/interval.md index df2ce097df0..9df25e3f555 100644 --- a/docs/zh/sql-reference/data-types/special-data-types/interval.md +++ b/docs/zh/sql-reference/data-types/special-data-types/interval.md @@ -5,9 +5,9 @@ toc_priority: 61 toc_title: "\u95F4\u9694" --- -# 间隔 {#data-type-interval} +# Interval类型 {#data-type-interval} -表示时间和日期间隔的数据类型族。 由此产生的类型 [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 接线员 +表示时间和日期间隔的数据类型家族。 [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 运算的结果类型。 !!! warning "警告" `Interval` 数据类型值不能存储在表中。 @@ -15,7 +15,7 @@ toc_title: "\u95F4\u9694" 结构: - 时间间隔作为无符号整数值。 -- 间隔的类型。 +- 时间间隔的类型。 支持的时间间隔类型: @@ -28,7 +28,7 @@ toc_title: "\u95F4\u9694" - `QUARTER` - `YEAR` -对于每个间隔类型,都有一个单独的数据类型。 例如, `DAY` 间隔对应于 `IntervalDay` 数据类型: +对于每个时间间隔类型,都有一个单独的数据类型。 例如, `DAY` 间隔对应于 `IntervalDay` 数据类型: ``` sql SELECT toTypeName(INTERVAL 4 DAY) @@ -42,7 +42,7 @@ SELECT toTypeName(INTERVAL 4 DAY) ## 使用说明 {#data-type-interval-usage-remarks} -您可以使用 `Interval`-在算术运算类型值 [日期](../../../sql-reference/data-types/date.md) 和 [日期时间](../../../sql-reference/data-types/datetime.md)-类型值。 例如,您可以将4天添加到当前时间: +您可以在与 [日期](../../../sql-reference/data-types/date.md) 和 [日期时间](../../../sql-reference/data-types/datetime.md) 类型值的算术运算中使用 `Interval` 类型值。 例如,您可以将4天添加到当前时间: ``` sql SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY @@ -54,10 +54,10 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY └─────────────────────┴───────────────────────────────┘ ``` -不同类型的间隔不能合并。 你不能使用间隔,如 `4 DAY 1 HOUR`. 以小于或等于间隔的最小单位的单位指定间隔,例如,间隔 `1 day and an hour` 间隔可以表示为 `25 HOUR` 或 `90000 SECOND`. - -你不能执行算术运算 `Interval`-类型值,但你可以添加不同类型的时间间隔,因此值 `Date` 或 `DateTime` 数据类型。 例如: +不同类型的间隔不能合并。 你不能使用诸如 `4 DAY 1 HOUR` 的时间间隔. 以小于或等于时间间隔最小单位的单位来指定间隔,例如,时间间隔 `1 day and an hour` 可以表示为 `25 HOUR` 或 `90000 SECOND`. +你不能对 `Interval` 类型的值执行算术运算,但你可以向 `Date` 或 `DateTime` 数据类型的值添加不同类型的时间间隔,例如: + ``` sql SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR ``` @@ -81,5 +81,5 @@ Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argu ## 另请参阅 {#see-also} -- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 接线员 +- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 操作 - [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) 类型转换函数 diff --git a/docs/zh/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md b/docs/zh/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md index f981f442fb6..cbd88de0038 100644 --- a/docs/zh/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md +++ b/docs/zh/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md @@ -55,7 +55,7 @@ SOURCE(SOURCE_TYPE(param1 val1 ... 
paramN valN)) -- Source configuration 或 ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) SETTINGS(format_csv_allow_single_quotes = 0) ``` @@ -87,7 +87,7 @@ SETTINGS(format_csv_allow_single_quotes = 0) 或 ``` sql -SOURCE(FILE(path '/opt/dictionaries/os.tsv' format 'TabSeparated')) +SOURCE(FILE(path './user_files/os.tsv' format 'TabSeparated')) ``` 设置字段: diff --git a/docs/zh/sql-reference/dictionaries/index.md b/docs/zh/sql-reference/dictionaries/index.md index 7e8f5e83aa7..092afd5bac1 100644 --- a/docs/zh/sql-reference/dictionaries/index.md +++ b/docs/zh/sql-reference/dictionaries/index.md @@ -8,15 +8,15 @@ toc_title: "\u5BFC\u8A00" # 字典 {#dictionaries} -字典是一个映射 (`key -> attributes`)这是方便各种类型的参考清单。 +字典是一个映射 (`键 -> 属性`), 是方便各种类型的参考清单。 -ClickHouse支持使用可用于查询的字典的特殊功能。 这是更容易和更有效地使用字典与功能比 `JOIN` 与参考表。 +ClickHouse支持一些特殊函数配合字典在查询中使用。 将字典与函数结合使用比将 `JOIN` 操作与引用表结合使用更简单、更有效。 [NULL](../../sql-reference/syntax.md#null-literal) 值不能存储在字典中。 ClickHouse支持: -- [内置字典](internal-dicts.md#internal_dicts) 具有特定的 [功能集](../../sql-reference/functions/ym-dict-functions.md). -- [插件(外部)字典](external-dictionaries/external-dicts.md#dicts-external-dicts) 用一个 [功能集](../../sql-reference/functions/ext-dict-functions.md). +- [内置字典](internal-dicts.md#internal_dicts) ,这些字典具有特定的 [函数集](../../sql-reference/functions/ym-dict-functions.md). +- [插件(外部)字典](external-dictionaries/external-dicts.md#dicts-external-dicts) ,这些字典拥有一个 [函数集](../../sql-reference/functions/ext-dict-functions.md). [原始文章](https://clickhouse.tech/docs/en/query_language/dicts/) diff --git a/docs/zh/sql-reference/statements/create.md b/docs/zh/sql-reference/statements/create.md index 639af0841dc..46e82bd1733 100644 --- a/docs/zh/sql-reference/statements/create.md +++ b/docs/zh/sql-reference/statements/create.md @@ -238,7 +238,7 @@ SELECT a, b, c FROM (SELECT ...) 
当一个`SELECT`子句包含`DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`时,请注意,这些仅会在插入数据时在每个单独的数据块上执行。例如,如果你在其中包含了`GROUP BY`,则只会在查询期间进行聚合,但聚合范围仅限于单个批的写入数据。数据不会进一步被聚合。但是当你使用一些其他数据聚合引擎时这是例外的,如:`SummingMergeTree`。 -目前对物化视图执行`ALTER`是不支持的,因此这可能是不方便的。如果物化视图是使用的`TO [db.]name`的方式进行构建的,你可以使用`DETACH`语句现将视图剥离,然后使用`ALTER`运行在目标表上,然后使用`ATTACH`将之前剥离的表重新加载进来。 +目前对物化视图执行`ALTER`是不支持的,因此这可能是不方便的。如果物化视图是使用的`TO [db.]name`的方式进行构建的,你可以使用`DETACH`语句先将视图剥离,然后使用`ALTER`运行在目标表上,然后使用`ATTACH`将之前剥离的表重新加载进来。 视图看起来和普通的表相同。例如,你可以通过`SHOW TABLES`查看到它们。 diff --git a/docs/zh/sql-reference/syntax.md b/docs/zh/sql-reference/syntax.md index 8c331db1139..c05c5a1a7bf 100644 --- a/docs/zh/sql-reference/syntax.md +++ b/docs/zh/sql-reference/syntax.md @@ -14,7 +14,7 @@ INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def') 含`INSERT INTO t VALUES` 的部分由完整SQL解析器处理,包含数据的部分 `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` 交给快速流式解析器解析。通过设置参数 [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions),你也可以对数据部分开启完整SQL解析器。当 `input_format_values_interpret_expressions = 1` 时,CH优先采用快速流式解析器来解析数据。如果失败,CH再尝试用完整SQL解析器来处理,就像处理SQL [expression](#syntax-expressions) 一样。 -数据可以采用任何格式。当CH接受到请求时,服务端先在内存中计算不超过 [max_query_size](../operations/settings/settings.md#settings-max_query_size) 字节的请求数据(默认1 mb),然后剩下部分交给快速流式解析器。 +数据可以采用任何格式。当CH接收到请求时,服务端先在内存中计算不超过 [max_query_size](../operations/settings/settings.md#settings-max_query_size) 字节的请求数据(默认1 mb),然后剩下部分交给快速流式解析器。 这将避免在处理大型的 `INSERT`语句时出现问题。 diff --git a/programs/CMakeLists.txt b/programs/CMakeLists.txt index 09199e83026..2af0331c70b 100644 --- a/programs/CMakeLists.txt +++ b/programs/CMakeLists.txt @@ -47,6 +47,15 @@ option (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE "HTTP-server working like a proxy to Li option (ENABLE_CLICKHOUSE_GIT_IMPORT "A tool to analyze Git repositories" ${ENABLE_CLICKHOUSE_ALL}) + +option (ENABLE_CLICKHOUSE_KEEPER "ClickHouse alternative to ZooKeeper" ${ENABLE_CLICKHOUSE_ALL}) +if (NOT USE_NURAFT) + # RECONFIGURE_MESSAGE_LEVEL should not be used here, + # since USE_NURAFT is set to OFF for FreeBSD and Darwin. + message (STATUS "clickhouse-keeper will not be built (lack of NuRaft)") + set(ENABLE_CLICKHOUSE_KEEPER OFF) +endif() + if (CLICKHOUSE_SPLIT_BINARY) option(ENABLE_CLICKHOUSE_INSTALL "Install ClickHouse without .deb/.rpm/.tgz packages (having the binary only)" OFF) else () @@ -134,6 +143,12 @@ else() message(STATUS "ClickHouse git-import: OFF") endif() +if (ENABLE_CLICKHOUSE_KEEPER) + message(STATUS "ClickHouse keeper mode: ON") +else() + message(STATUS "ClickHouse keeper mode: OFF") +endif() + if(NOT (MAKE_STATIC_LIBRARIES OR SPLIT_SHARED_LIBRARIES)) set(CLICKHOUSE_ONE_SHARED ON) endif() @@ -189,6 +204,54 @@ macro(clickhouse_program_add name) clickhouse_program_add_executable(${name}) endmacro() +# Embed default config files as a resource into the binary. +# This is needed for two purposes: +# 1. Allow to run the binary without download of any other files. +# 2. Allow to implement "sudo clickhouse install" tool. +# +# Arguments: target (server, client, keeper, etc.) and list of files +# +# Also dependency on TARGET_FILE is required, look at examples in programs/server and programs/keeper +macro(clickhouse_embed_binaries) + # TODO We actually need this on Mac, FreeBSD. 
+ if (OS_LINUX) + + set(arguments_list "${ARGN}") + list(GET arguments_list 0 target) + + # for some reason cmake iterates loop including + math(EXPR arguments_count "${ARGC}-1") + + foreach(RESOURCE_POS RANGE 1 "${arguments_count}") + list(GET arguments_list "${RESOURCE_POS}" RESOURCE_FILE) + set(RESOURCE_OBJ ${RESOURCE_FILE}.o) + set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ}) + + # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake + # PPC64LE fails to do this with objcopy, use ld or lld instead + if (ARCH_PPC64LE) + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE}) + else() + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" + COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents + "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}") + endif() + set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) + endforeach() + + add_library(clickhouse_${target}_configs STATIC ${RESOURCE_OBJS}) + set_target_properties(clickhouse_${target}_configs PROPERTIES LINKER_LANGUAGE C) + + # whole-archive prevents symbols from being discarded for unknown reason + # CMake can shuffle each of target_link_libraries arguments with other + # libraries in linker command. To avoid this we hardcode whole-archive + # library into single string. + add_dependencies(clickhouse-${target}-lib clickhouse_${target}_configs) + endif () +endmacro() + add_subdirectory (server) add_subdirectory (client) @@ -203,6 +266,10 @@ add_subdirectory (install) add_subdirectory (git-import) add_subdirectory (bash-completion) +if (ENABLE_CLICKHOUSE_KEEPER) + add_subdirectory (keeper) +endif() + if (ENABLE_CLICKHOUSE_ODBC_BRIDGE) add_subdirectory (odbc-bridge) endif () @@ -212,15 +279,26 @@ if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) endif () if (CLICKHOUSE_ONE_SHARED) - add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) - target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK}) - target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE}) + add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} 
${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES} ${CLICKHOUSE_KEEPER_SOURCES}) + target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK} ${CLICKHOUSE_KEEPER_LINK}) + target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} ${CLICKHOUSE_KEEPER_INCLUDE}) set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "") install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse) endif() if (CLICKHOUSE_SPLIT_BINARY) - set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier) + set (CLICKHOUSE_ALL_TARGETS + clickhouse-server + clickhouse-client + clickhouse-local + clickhouse-benchmark + clickhouse-extract-from-config + clickhouse-compressor + clickhouse-format + clickhouse-obfuscator + clickhouse-git-import + clickhouse-copier + ) if (ENABLE_CLICKHOUSE_ODBC_BRIDGE) list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge) @@ -230,6 +308,10 @@ if (CLICKHOUSE_SPLIT_BINARY) list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-library-bridge) endif () + if (ENABLE_CLICKHOUSE_KEEPER) + list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-keeper) + endif () + set_target_properties(${CLICKHOUSE_ALL_TARGETS} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) 
add_custom_target (clickhouse-bundle ALL DEPENDS ${CLICKHOUSE_ALL_TARGETS}) @@ -277,6 +359,9 @@ else () if (ENABLE_CLICKHOUSE_GIT_IMPORT) clickhouse_target_link_split_lib(clickhouse git-import) endif () + if (ENABLE_CLICKHOUSE_KEEPER) + clickhouse_target_link_split_lib(clickhouse keeper) + endif() if (ENABLE_CLICKHOUSE_INSTALL) clickhouse_target_link_split_lib(clickhouse install) endif () @@ -332,6 +417,11 @@ else () install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import) endif () + if (ENABLE_CLICKHOUSE_KEEPER) + add_custom_target (clickhouse-keeper ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-keeper DEPENDS clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-keeper" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + list(APPEND CLICKHOUSE_BUNDLE clickhouse-keeper) + endif () install (TARGETS clickhouse RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) diff --git a/programs/benchmark/Benchmark.cpp b/programs/benchmark/Benchmark.cpp index 1d2b579db3a..82c93eef9ff 100644 --- a/programs/benchmark/Benchmark.cpp +++ b/programs/benchmark/Benchmark.cpp @@ -159,7 +159,7 @@ private: bool print_stacktrace; const Settings & settings; SharedContextHolder shared_context; - ContextPtr global_context; + ContextMutablePtr global_context; QueryProcessingStage::Enum query_processing_stage; /// Don't execute new queries after timelimit or SIGINT or exception diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index 4e59016138d..db2878d7d4c 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -180,7 +180,7 @@ private: bool has_vertical_output_suffix = false; /// Is \G present at the end of the query string? SharedContextHolder shared_context = Context::createShared(); - ContextPtr context = Context::createGlobal(shared_context.get()); + ContextMutablePtr context = Context::createGlobal(shared_context.get()); /// Buffer that reads from stdin in batch mode. ReadBufferFromFileDescriptor std_in{STDIN_FILENO}; @@ -1365,6 +1365,27 @@ private: { const auto * exception = server_exception ? server_exception.get() : client_exception.get(); fmt::print(stderr, "Error on processing query '{}': {}\n", ast_to_process->formatForErrorMessage(), exception->message()); + + // Try to reconnect after errors, for two reasons: + // 1. We might not have realized that the server died, e.g. if + // it sent us a trace and closed connection properly. + // 2. The connection might have gotten into a wrong state and + // the next query will get false positive about + // "Unknown packet from server". + try + { + connection->forceConnected(connection_parameters.timeouts); + } + catch (...) + { + // Just report it, we'll terminate below. + fmt::print(stderr, + "Error while reconnecting to the server: Code: {}: {}\n", + getCurrentExceptionCode(), + getCurrentExceptionMessage(true)); + + assert(!connection->isConnected()); + } } if (!connection->isConnected()) @@ -1468,11 +1489,6 @@ private: server_exception.reset(); client_exception.reset(); have_error = false; - - // We have to reinitialize connection after errors, because it - // might have gotten into a wrong state and we'll get false - // positives about "Unknown packet from server". 
- connection->forceConnected(connection_parameters.timeouts); } else if (ast_to_process->formatForErrorMessage().size() > 500) { diff --git a/programs/config_tools.h.in b/programs/config_tools.h.in index abe9ef8c562..50ba0c16a83 100644 --- a/programs/config_tools.h.in +++ b/programs/config_tools.h.in @@ -16,3 +16,4 @@ #cmakedefine01 ENABLE_CLICKHOUSE_INSTALL #cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE #cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE +#cmakedefine01 ENABLE_CLICKHOUSE_KEEPER diff --git a/programs/copier/ClusterCopier.h b/programs/copier/ClusterCopier.h index e875ca7df2e..085fa2ece06 100644 --- a/programs/copier/ClusterCopier.h +++ b/programs/copier/ClusterCopier.h @@ -12,14 +12,14 @@ namespace DB { -class ClusterCopier : WithContext +class ClusterCopier : WithMutableContext { public: ClusterCopier(const String & task_path_, const String & host_id_, const String & proxy_database_name_, - ContextPtr context_) - : WithContext(context_), + ContextMutablePtr context_) + : WithMutableContext(context_), task_zookeeper_path(task_path_), host_id(host_id_), working_database_name(proxy_database_name_), diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index 96d336673d0..8c187978106 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -983,7 +983,7 @@ int mainEntryClickHouseStop(int argc, char ** argv) desc.add_options() ("help,h", "produce help message") ("pid-path", po::value()->default_value("/var/run/clickhouse-server"), "directory for pid file") - ("force", po::value()->default_value(false), "Stop with KILL signal instead of TERM") + ("force", po::bool_switch(), "Stop with KILL signal instead of TERM") ; po::variables_map options; diff --git a/programs/keeper/CMakeLists.txt b/programs/keeper/CMakeLists.txt new file mode 100644 index 00000000000..e604d0e304e --- /dev/null +++ b/programs/keeper/CMakeLists.txt @@ -0,0 +1,24 @@ +set(CLICKHOUSE_KEEPER_SOURCES + Keeper.cpp +) + +if (OS_LINUX) + set (LINK_RESOURCE_LIB INTERFACE "-Wl,${WHOLE_ARCHIVE} $ -Wl,${NO_WHOLE_ARCHIVE}") +endif () + +set (CLICKHOUSE_KEEPER_LINK + PRIVATE + clickhouse_common_config + clickhouse_common_io + clickhouse_common_zookeeper + daemon + dbms + + ${LINK_RESOURCE_LIB} +) + +clickhouse_program_add(keeper) + +install (FILES keeper_config.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-keeper" COMPONENT clickhouse-keeper) + +clickhouse_embed_binaries(keeper keeper_config.xml keeper_embedded.xml) diff --git a/programs/keeper/Keeper.cpp b/programs/keeper/Keeper.cpp new file mode 100644 index 00000000000..8b35ec12850 --- /dev/null +++ b/programs/keeper/Keeper.cpp @@ -0,0 +1,468 @@ +#include "Keeper.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if !defined(ARCADIA_BUILD) +# include "config_core.h" +# include "Common/config_version.h" +#endif + +#if USE_SSL +# include +# include +#endif + +#include + +#if defined(OS_LINUX) +# include +# include +#endif + + +int mainEntryClickHouseKeeper(int argc, char ** argv) +{ + DB::Keeper app; + + try + { + return app.run(argc, argv); + } + catch (...) + { + std::cerr << DB::getCurrentExceptionMessage(true) << "\n"; + auto code = DB::getCurrentExceptionCode(); + return code ? 
code : 1; + } +} + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NO_ELEMENTS_IN_CONFIG; + extern const int SUPPORT_IS_DISABLED; + extern const int NETWORK_ERROR; + extern const int MISMATCHING_USERS_FOR_PROCESS_AND_DATA; + extern const int FAILED_TO_GETPWUID; +} + +namespace +{ + +int waitServersToFinish(std::vector & servers, size_t seconds_to_wait) +{ + const int sleep_max_ms = 1000 * seconds_to_wait; + const int sleep_one_ms = 100; + int sleep_current_ms = 0; + int current_connections = 0; + for (;;) + { + current_connections = 0; + + for (auto & server : servers) + { + server.stop(); + current_connections += server.currentConnections(); + } + + if (!current_connections) + break; + + sleep_current_ms += sleep_one_ms; + if (sleep_current_ms < sleep_max_ms) + std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + else + break; + } + return current_connections; +} + +Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) +{ + Poco::Net::SocketAddress socket_address; + try + { + socket_address = Poco::Net::SocketAddress(host, port); + } + catch (const Poco::Net::DNSException & e) + { + const auto code = e.code(); + if (code == EAI_FAMILY +#if defined(EAI_ADDRFAMILY) + || code == EAI_ADDRFAMILY +#endif + ) + { + LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. " + "If it is an IPv6 address and your host has disabled IPv6, then consider to " + "specify IPv4 address to listen in element of configuration " + "file. Example: 0.0.0.0", + host, e.code(), e.message()); + } + + throw; + } + return socket_address; +} + +[[noreturn]] void forceShutdown() +{ +#if defined(THREAD_SANITIZER) && defined(OS_LINUX) + /// Thread sanitizer tries to do something on exit that we don't need if we want to exit immediately, + /// while connection handling threads are still run. + (void)syscall(SYS_exit_group, 0); + __builtin_unreachable(); +#else + _exit(0); +#endif +} + +std::string getUserName(uid_t user_id) +{ + /// Try to convert user id into user name. 
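The `waitServersToFinish` helper above is a simple bounded wait: stop accepting new connections, then poll the remaining connection count until it reaches zero or the timeout expires. A minimal standalone sketch of that pattern follows; `StubServer` is a hypothetical stand-in for the Poco-based server objects used in Keeper.cpp, not a real ClickHouse type.

```
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for a protocol server; only what the wait loop needs.
struct StubServer
{
    int open_connections = 3;
    void stop() {}                       // stop accepting new connections
    int currentConnections()
    {
        // Pretend one connection finishes per poll.
        if (open_connections > 0) --open_connections;
        return open_connections;
    }
};

// Poll until all servers report zero connections or the timeout expires;
// returns the number of connections that are still open.
int waitServersToFinish(std::vector<StubServer> & servers, size_t seconds_to_wait)
{
    const int sleep_max_ms = 1000 * static_cast<int>(seconds_to_wait);
    const int sleep_one_ms = 100;
    int slept_ms = 0;
    int current_connections = 0;

    for (;;)
    {
        current_connections = 0;
        for (auto & server : servers)
        {
            server.stop();
            current_connections += server.currentConnections();
        }

        if (current_connections == 0 || slept_ms >= sleep_max_ms)
            break;

        std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms));
        slept_ms += sleep_one_ms;
    }
    return current_connections;
}

int main()
{
    std::vector<StubServer> servers(2);
    std::printf("connections left: %d\n", waitServersToFinish(servers, /* seconds_to_wait = */ 5));
}
```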
+    auto buffer_size = sysconf(_SC_GETPW_R_SIZE_MAX);
+    if (buffer_size <= 0)
+        buffer_size = 1024;
+    std::string buffer;
+    buffer.reserve(buffer_size);
+
+    struct passwd passwd_entry;
+    struct passwd * result = nullptr;
+    const auto error = getpwuid_r(user_id, &passwd_entry, buffer.data(), buffer_size, &result);
+
+    if (error)
+        throwFromErrno("Failed to find user name for " + toString(user_id), ErrorCodes::FAILED_TO_GETPWUID, error);
+    else if (result)
+        return result->pw_name;
+    return toString(user_id);
+}
+
+}
+
+Poco::Net::SocketAddress Keeper::socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure) const
+{
+    auto address = makeSocketAddress(host, port, &logger());
+#if !defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION < 0x01090100
+    if (secure)
+        /// Bug in old (<1.9.1) poco: listen() after bind() with the reusePort param will fail because there is no implementation in SecureServerSocketImpl
+        /// https://github.com/pocoproject/poco/pull/2257
+        socket.bind(address, /* reuseAddress = */ true);
+    else
+#endif
+#if POCO_VERSION < 0x01080000
+    socket.bind(address, /* reuseAddress = */ true);
+#else
+    socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ config().getBool("listen_reuse_port", false));
+#endif
+
+    socket.listen(/* backlog = */ config().getUInt("listen_backlog", 64));
+
+    return address;
+}
+
+void Keeper::createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const
+{
+    /// For testing purposes, user may omit tcp_port or http_port or https_port in configuration file.
+    if (!config().has(port_name))
+        return;
+
+    auto port = config().getInt(port_name);
+    try
+    {
+        func(port);
+    }
+    catch (const Poco::Exception &)
+    {
+        std::string message = "Listen [" + listen_host + "]:" + std::to_string(port) + " failed: " + getCurrentExceptionMessage(false);
+
+        if (listen_try)
+        {
+            LOG_WARNING(&logger(), "{}. If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider "
+                "specifying an IPv4 or IPv6 address that is not disabled to listen on, in the <listen_host> element of the configuration "
+                "file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> ."
+                " Example for disabled IPv4: <listen_host>::</listen_host>",
+                message);
+        }
+        else
+        {
+            throw Exception{message, ErrorCodes::NETWORK_ERROR};
+        }
+    }
+}
+
+void Keeper::uninitialize()
+{
+    logger().information("shutting down");
+    BaseDaemon::uninitialize();
+}
+
+int Keeper::run()
+{
+    if (config().hasOption("help"))
+    {
+        Poco::Util::HelpFormatter help_formatter(Keeper::options());
+        auto header_str = fmt::format("{} [OPTION] [-- [ARG]...]\n"
+            "positional arguments can be used to rewrite config.xml properties, for example, --http_port=8010",
+            commandName());
+        help_formatter.setHeader(header_str);
+        help_formatter.format(std::cout);
+        return 0;
+    }
+    if (config().hasOption("version"))
+    {
+        std::cout << DBMS_NAME << " keeper version " << VERSION_STRING << VERSION_OFFICIAL << "."
<< std::endl; + return 0; + } + + return Application::run(); // NOLINT +} + +void Keeper::initialize(Poco::Util::Application & self) +{ + BaseDaemon::initialize(self); + logger().information("starting up"); + + LOG_INFO(&logger(), "OS Name = {}, OS Version = {}, OS Architecture = {}", + Poco::Environment::osName(), + Poco::Environment::osVersion(), + Poco::Environment::osArchitecture()); +} + +std::string Keeper::getDefaultConfigFileName() const +{ + return "keeper_config.xml"; +} + +void Keeper::defineOptions(Poco::Util::OptionSet & options) +{ + options.addOption( + Poco::Util::Option("help", "h", "show help and exit") + .required(false) + .repeatable(false) + .binding("help")); + options.addOption( + Poco::Util::Option("version", "V", "show version and exit") + .required(false) + .repeatable(false) + .binding("version")); + BaseDaemon::defineOptions(options); +} + +int Keeper::main(const std::vector & /*args*/) +{ + Poco::Logger * log = &logger(); + + UseSSL use_ssl; + + MainThreadStatus::getInstance(); + +#if !defined(NDEBUG) || !defined(__OPTIMIZE__) + LOG_WARNING(log, "Keeper was built in debug mode. It will work slowly."); +#endif + +#if defined(SANITIZER) + LOG_WARNING(log, "Keeper was built with sanitizer. It will work slowly."); +#endif + + auto shared_context = Context::createShared(); + global_context = Context::createGlobal(shared_context.get()); + + global_context->makeGlobalContext(); + global_context->setApplicationType(Context::ApplicationType::KEEPER); + + if (!config().has("keeper_server")) + throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Keeper configuration ( section) not found in config"); + + + std::string path; + + if (config().has("keeper_server.storage_path")) + path = config().getString("keeper_server.storage_path"); + else if (config().has("keeper_server.log_storage_path")) + path = config().getString("keeper_server.log_storage_path"); + else if (config().has("keeper_server.snapshot_storage_path")) + path = config().getString("keeper_server.snapshot_storage_path"); + else + path = std::filesystem::path{KEEPER_DEFAULT_PATH}; + + + /// Check that the process user id matches the owner of the data. + const auto effective_user_id = geteuid(); + struct stat statbuf; + if (stat(path.c_str(), &statbuf) == 0 && effective_user_id != statbuf.st_uid) + { + const auto effective_user = getUserName(effective_user_id); + const auto data_owner = getUserName(statbuf.st_uid); + std::string message = "Effective user of the process (" + effective_user + + ") does not match the owner of the data (" + data_owner + ")."; + if (effective_user_id == 0) + { + message += " Run under 'sudo -u " + data_owner + "'."; + throw Exception(message, ErrorCodes::MISMATCHING_USERS_FOR_PROCESS_AND_DATA); + } + else + { + LOG_WARNING(log, message); + } + } + + const Settings & settings = global_context->getSettingsRef(); + + GlobalThreadPool::initialize(config().getUInt("max_thread_pool_size", 100)); + + static ServerErrorHandler error_handler; + Poco::ErrorHandler::set(&error_handler); + + /// Initialize DateLUT early, to not interfere with running time of first query. 
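The ownership check in `main` above refuses to continue when the process runs as root over data owned by another user, and only logs a warning for an ordinary mismatch. Below is a rough standalone sketch of the same check using plain POSIX calls (`stat`, `geteuid`, `getpwuid`) instead of ClickHouse's helpers; the coordination path is only an example value.

```
#include <cstdio>
#include <string>
#include <pwd.h>
#include <sys/stat.h>
#include <unistd.h>

// Best-effort uid -> user name; falls back to the numeric id.
static std::string userName(uid_t uid)
{
    if (const struct passwd * pw = getpwuid(uid))
        return pw->pw_name;
    return std::to_string(uid);
}

// Returns false (a fatal condition) only when running as root over data
// owned by somebody else; a plain mismatch is merely reported.
static bool checkDataOwnership(const std::string & path)
{
    struct stat statbuf{};
    if (stat(path.c_str(), &statbuf) != 0)
        return true;                     // path does not exist yet: nothing to check

    const uid_t effective_user_id = geteuid();
    if (effective_user_id == statbuf.st_uid)
        return true;

    std::fprintf(stderr,
                 "Effective user of the process (%s) does not match the owner of the data (%s).\n",
                 userName(effective_user_id).c_str(), userName(statbuf.st_uid).c_str());

    if (effective_user_id == 0)          // root: refuse and suggest sudo -u <owner>
    {
        std::fprintf(stderr, "Run under 'sudo -u %s'.\n", userName(statbuf.st_uid).c_str());
        return false;
    }
    return true;                         // non-root mismatch: warn only
}

int main()
{
    return checkDataOwnership("/var/lib/clickhouse/coordination") ? 0 : 1;
}
```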
+ LOG_DEBUG(log, "Initializing DateLUT."); + DateLUT::instance(); + LOG_TRACE(log, "Initialized DateLUT with time zone '{}'.", DateLUT::instance().getTimeZone()); + + /// Don't want to use DNS cache + DNSResolver::instance().setDisableCacheFlag(); + + Poco::ThreadPool server_pool(3, config().getUInt("max_connections", 1024)); + + std::vector listen_hosts = DB::getMultipleValuesFromConfig(config(), "", "listen_host"); + + bool listen_try = config().getBool("listen_try", false); + if (listen_hosts.empty()) + { + listen_hosts.emplace_back("::1"); + listen_hosts.emplace_back("127.0.0.1"); + listen_try = true; + } + + auto servers = std::make_shared>(); + + /// Initialize test keeper RAFT. Do nothing if no nu_keeper_server in config. + global_context->initializeKeeperStorageDispatcher(); + for (const auto & listen_host : listen_hosts) + { + /// TCP Keeper + const char * port_name = "keeper_server.tcp_port"; + createServer(listen_host, port_name, listen_try, [&](UInt16 port) + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers->emplace_back( + port_name, + std::make_unique( + new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); + + LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); + }); + + const char * secure_port_name = "keeper_server.tcp_port_secure"; + createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port) + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers->emplace_back( + secure_port_name, + std::make_unique( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams)); + LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); + } + + for (auto & server : *servers) + server.start(); + + SCOPE_EXIT({ + LOG_INFO(log, "Shutting down."); + + global_context->shutdown(); + + LOG_DEBUG(log, "Waiting for current connections to Keeper to finish."); + int current_connections = 0; + for (auto & server : *servers) + { + server.stop(); + current_connections += server.currentConnections(); + } + + if (current_connections) + LOG_INFO(log, "Closed all listening sockets. Waiting for {} outstanding connections.", current_connections); + else + LOG_INFO(log, "Closed all listening sockets."); + + if (current_connections > 0) + current_connections = waitServersToFinish(*servers, config().getInt("shutdown_wait_unfinished", 5)); + + if (current_connections) + LOG_INFO(log, "Closed connections to Keeper. But {} remain. Probably some users cannot finish their connections after context shutdown.", current_connections); + else + LOG_INFO(log, "Closed connections to Keeper."); + + global_context->shutdownKeeperStorageDispatcher(); + + /// Wait server pool to avoid use-after-free of destroyed context in the handlers + server_pool.joinAll(); + + /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available. 
+ * At this moment, no one could own shared part of Context. + */ + global_context.reset(); + shared_context.reset(); + + LOG_DEBUG(log, "Destroyed global context."); + + if (current_connections) + { + LOG_INFO(log, "Will shutdown forcefully."); + forceShutdown(); + } + }); + + + buildLoggers(config(), logger()); + + LOG_INFO(log, "Ready for connections."); + + waitForTerminationRequest(); + + return Application::EXIT_OK; +} + + +void Keeper::logRevision() const +{ + Poco::Logger::root().information("Starting ClickHouse Keeper " + std::string{VERSION_STRING} + + " with revision " + std::to_string(ClickHouseRevision::getVersionRevision()) + + ", " + build_id_info + + ", PID " + std::to_string(getpid())); +} + + +} diff --git a/programs/keeper/Keeper.h b/programs/keeper/Keeper.h new file mode 100644 index 00000000000..f5b97dacf7d --- /dev/null +++ b/programs/keeper/Keeper.h @@ -0,0 +1,69 @@ +#pragma once + +#include +#include + +namespace Poco +{ + namespace Net + { + class ServerSocket; + } +} + +namespace DB +{ + +/// standalone clickhouse-keeper server (replacement for ZooKeeper). Uses the same +/// config as clickhouse-server. Serves requests on TCP ports with or without +/// SSL using ZooKeeper protocol. +class Keeper : public BaseDaemon, public IServer +{ +public: + using ServerApplication::run; + + Poco::Util::LayeredConfiguration & config() const override + { + return BaseDaemon::config(); + } + + Poco::Logger & logger() const override + { + return BaseDaemon::logger(); + } + + ContextMutablePtr context() const override + { + return global_context; + } + + bool isCancelled() const override + { + return BaseDaemon::isCancelled(); + } + + void defineOptions(Poco::Util::OptionSet & _options) override; + +protected: + void logRevision() const override; + + int run() override; + + void initialize(Application & self) override; + + void uninitialize() override; + + int main(const std::vector & args) override; + + std::string getDefaultConfigFileName() const override; + +private: + ContextMutablePtr global_context; + + Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; + + using CreateServerFunc = std::function; + void createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const; +}; + +} diff --git a/programs/keeper/clickhouse-keeper.cpp b/programs/keeper/clickhouse-keeper.cpp new file mode 100644 index 00000000000..baa673f79ee --- /dev/null +++ b/programs/keeper/clickhouse-keeper.cpp @@ -0,0 +1,6 @@ +int mainEntryClickHouseKeeper(int argc, char ** argv); + +int main(int argc_, char ** argv_) +{ + return mainEntryClickHouseKeeper(argc_, argv_); +} diff --git a/programs/keeper/keeper_config.xml b/programs/keeper/keeper_config.xml new file mode 100644 index 00000000000..ef218c9f2d7 --- /dev/null +++ b/programs/keeper/keeper_config.xml @@ -0,0 +1,81 @@ + + + + trace + /var/log/clickhouse-keeper/clickhouse-keeper.log + /var/log/clickhouse-keeper/clickhouse-keeper.err.log + + 1000M + 10 + + + + 4096 + + + 9181 + + + 1 + + /var/lib/clickhouse/coordination/logs + /var/lib/clickhouse/coordination/snapshots + + + 10000 + 30000 + information + + + + + + 1 + + + localhost + 44444 + + + + + + + + + + + + + /etc/clickhouse-keeper/server.crt + /etc/clickhouse-keeper/server.key + + /etc/clickhouse-keeper/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + + diff --git a/programs/keeper/keeper_embedded.xml 
b/programs/keeper/keeper_embedded.xml new file mode 100644 index 00000000000..37edaedba80 --- /dev/null +++ b/programs/keeper/keeper_embedded.xml @@ -0,0 +1,21 @@ + + + trace + true + + + + 9181 + 1 + ./keeper_log + ./keeper_snapshot + + + + 1 + localhost + 44444 + + + + diff --git a/programs/local/LocalServer.cpp b/programs/local/LocalServer.cpp index c53230eaf3a..aa10a9c2d14 100644 --- a/programs/local/LocalServer.cpp +++ b/programs/local/LocalServer.cpp @@ -100,7 +100,7 @@ void LocalServer::initialize(Poco::Util::Application & self) } } -void LocalServer::applyCmdSettings(ContextPtr context) +void LocalServer::applyCmdSettings(ContextMutablePtr context) { context->applySettingsChanges(cmd_settings.changes()); } @@ -642,7 +642,7 @@ void LocalServer::init(int argc, char ** argv) argsToConfig(arguments, config(), 100); } -void LocalServer::applyCmdOptions(ContextPtr context) +void LocalServer::applyCmdOptions(ContextMutablePtr context) { context->setDefaultFormat(config().getString("output-format", config().getString("format", "TSV"))); applyCmdSettings(context); diff --git a/programs/local/LocalServer.h b/programs/local/LocalServer.h index a9e4668617e..e82caad7542 100644 --- a/programs/local/LocalServer.h +++ b/programs/local/LocalServer.h @@ -36,8 +36,8 @@ private: std::string getInitialCreateTableQuery(); void tryInitPath(); - void applyCmdOptions(ContextPtr context); - void applyCmdSettings(ContextPtr context); + void applyCmdOptions(ContextMutablePtr context); + void applyCmdSettings(ContextMutablePtr context); void processQueries(); void setupUsers(); void cleanup(); @@ -45,7 +45,7 @@ private: protected: SharedContextHolder shared_context; - ContextPtr global_context; + ContextMutablePtr global_context; /// Settings specified via command line args Settings cmd_settings; diff --git a/programs/main.cpp b/programs/main.cpp index cbb22b7a87b..ccdf4d50fb4 100644 --- a/programs/main.cpp +++ b/programs/main.cpp @@ -55,6 +55,9 @@ int mainEntryClickHouseObfuscator(int argc, char ** argv); #if ENABLE_CLICKHOUSE_GIT_IMPORT int mainEntryClickHouseGitImport(int argc, char ** argv); #endif +#if ENABLE_CLICKHOUSE_KEEPER +int mainEntryClickHouseKeeper(int argc, char ** argv); +#endif #if ENABLE_CLICKHOUSE_INSTALL int mainEntryClickHouseInstall(int argc, char ** argv); int mainEntryClickHouseStart(int argc, char ** argv); @@ -112,6 +115,9 @@ std::pair clickhouse_applications[] = #if ENABLE_CLICKHOUSE_GIT_IMPORT {"git-import", mainEntryClickHouseGitImport}, #endif +#if ENABLE_CLICKHOUSE_KEEPER + {"keeper", mainEntryClickHouseKeeper}, +#endif #if ENABLE_CLICKHOUSE_INSTALL {"install", mainEntryClickHouseInstall}, {"start", mainEntryClickHouseStart}, diff --git a/programs/obfuscator/Obfuscator.cpp b/programs/obfuscator/Obfuscator.cpp index fb6817fbf80..f68b255158c 100644 --- a/programs/obfuscator/Obfuscator.cpp +++ b/programs/obfuscator/Obfuscator.cpp @@ -1133,7 +1133,7 @@ try } SharedContextHolder shared_context = Context::createShared(); - ContextPtr context = Context::createGlobal(shared_context.get()); + auto context = Context::createGlobal(shared_context.get()); context->makeGlobalContext(); ReadBufferFromFileDescriptor file_in(STDIN_FILENO); diff --git a/programs/server/CMakeLists.txt b/programs/server/CMakeLists.txt index 0dcfbce1c30..f7f76fdb450 100644 --- a/programs/server/CMakeLists.txt +++ b/programs/server/CMakeLists.txt @@ -31,37 +31,4 @@ clickhouse_program_add(server) install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-server" COMPONENT clickhouse) -# TODO We 
actually need this on Mac, FreeBSD. -if (OS_LINUX) - # Embed default config files as a resource into the binary. - # This is needed for two purposes: - # 1. Allow to run the binary without download of any other files. - # 2. Allow to implement "sudo clickhouse install" tool. - - foreach(RESOURCE_FILE config.xml users.xml embedded.xml play.html) - set(RESOURCE_OBJ ${RESOURCE_FILE}.o) - set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ}) - - # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake - # PPC64LE fails to do this with objcopy, use ld or lld instead - if (ARCH_PPC64LE) - add_custom_command(OUTPUT ${RESOURCE_OBJ} - COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE}) - else() - add_custom_command(OUTPUT ${RESOURCE_OBJ} - COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" - COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents - "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}") - endif() - set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) - endforeach(RESOURCE_FILE) - - add_library(clickhouse_server_configs STATIC ${RESOURCE_OBJS}) - set_target_properties(clickhouse_server_configs PROPERTIES LINKER_LANGUAGE C) - - # whole-archive prevents symbols from being discarded for unknown reason - # CMake can shuffle each of target_link_libraries arguments with other - # libraries in linker command. To avoid this we hardcode whole-archive - # library into single string. - add_dependencies(clickhouse-server-lib clickhouse_server_configs) -endif () +clickhouse_embed_binaries(server config.xml users.xml embedded.xml play.html) diff --git a/programs/server/Server.h b/programs/server/Server.h index c698108767c..45e5fccd51d 100644 --- a/programs/server/Server.h +++ b/programs/server/Server.h @@ -40,7 +40,7 @@ public: return BaseDaemon::logger(); } - ContextPtr context() const override + ContextMutablePtr context() const override { return global_context; } @@ -64,7 +64,7 @@ protected: std::string getDefaultCorePath() const override; private: - ContextPtr global_context; + ContextMutablePtr global_context; Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; using CreateServerFunc = std::function; diff --git a/programs/server/config.xml b/programs/server/config.xml index df8a5266c39..75647b10416 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -362,6 +362,20 @@ bind_dn - template used to construct the DN to bind to. The resulting DN will be constructed by replacing all '{user_name}' substrings of the template with the actual user name during each authentication attempt. + user_dn_detection - section with LDAP search parameters for detecting the actual user DN of the bound user. + This is mainly used in search filters for further role mapping when the server is Active Directory. The + resulting user DN will be used when replacing '{user_dn}' substrings wherever they are allowed. By default, + user DN is set equal to bind DN, but once search is performed, it will be updated with to the actual detected + user DN value. + base_dn - template used to construct the base DN for the LDAP search. 
+ The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings + of the template with the actual user name and bind DN during the LDAP search. + scope - scope of the LDAP search. + Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). + search_filter - template used to construct the search filter for the LDAP search. + The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}' + substrings of the template with the actual user name, bind DN, and base DN during the LDAP search. + Note, that the special characters must be escaped properly in XML. verification_cooldown - a period of time, in seconds, after a successful bind attempt, during which a user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server. Specify 0 (the default) to disable caching and force contacting the LDAP server for each authentication request. @@ -393,6 +407,17 @@ /path/to/tls_ca_cert_dir ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384 + Example (typical Active Directory with configured user DN detection for further role mapping): + + localhost + 389 + EXAMPLE\{user_name} + + CN=Users,DC=example,DC=com + (&(objectClass=user)(sAMAccountName={user_name})) + + no + --> @@ -444,15 +469,16 @@ There can be multiple 'role_mapping' sections defined inside the same 'ldap' section. All of them will be applied. base_dn - template used to construct the base DN for the LDAP search. - The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings - of the template with the actual user name and bind DN during each LDAP search. + The resulting DN will be constructed by replacing all '{user_name}', '{bind_dn}', and '{user_dn}' + substrings of the template with the actual user name, bind DN, and user DN during each LDAP search. scope - scope of the LDAP search. Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). search_filter - template used to construct the search filter for the LDAP search. - The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}' - substrings of the template with the actual user name, bind DN, and base DN during each LDAP search. + The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', '{user_dn}', and + '{base_dn}' substrings of the template with the actual user name, bind DN, user DN, and base DN during + each LDAP search. Note, that the special characters must be escaped properly in XML. - attribute - attribute name whose values will be returned by the LDAP search. + attribute - attribute name whose values will be returned by the LDAP search. 'cn', by default. prefix - prefix, that will be expected to be in front of each string in the original list of strings returned by the LDAP search. Prefix will be removed from the original strings and resulting strings will be treated as local role names. Empty, by default. 
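Both `user_dn_detection` and `role_mapping` above rely on the same mechanism: placeholders such as `{user_name}`, `{bind_dn}`, `{user_dn}` and `{base_dn}` are substituted into DN and filter templates before the LDAP search runs. The following sketch only illustrates that substitution step; it is not the actual ClickHouse LDAP code, and the sample values are made up.

```
#include <cstdio>
#include <map>
#include <string>

// Replace every "{key}" occurrence in the template with its value.
static std::string expandTemplate(std::string templ, const std::map<std::string, std::string> & values)
{
    for (const auto & [key, value] : values)
    {
        const std::string placeholder = "{" + key + "}";
        for (size_t pos = templ.find(placeholder); pos != std::string::npos;
             pos = templ.find(placeholder, pos + value.size()))
            templ.replace(pos, placeholder.size(), value);
    }
    return templ;
}

int main()
{
    const std::map<std::string, std::string> values = {
        {"user_name", "jdoe"},
        {"user_dn", "CN=John Doe,CN=Users,DC=example,DC=com"},
    };

    // Templates in the spirit of the bind_dn and role_mapping examples above.
    std::puts(expandTemplate("EXAMPLE\\{user_name}", values).c_str());
    std::puts(expandTemplate("(&(objectClass=group)(member={user_dn}))", values).c_str());
}
```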
@@ -471,6 +497,17 @@ clickhouse_ + Example (typical Active Directory with role mapping that relies on the detected user DN): + + my_ad_server + + CN=Users,DC=example,DC=com + CN + subtree + (&(objectClass=group)(member={user_dn})) + clickhouse_ + + --> diff --git a/programs/server/config.yaml.example b/programs/server/config.yaml.example new file mode 100644 index 00000000000..bebfd74ff58 --- /dev/null +++ b/programs/server/config.yaml.example @@ -0,0 +1,950 @@ +# This is an example of a configuration file "config.xml" rewritten in YAML +# You can read this documentation for detailed information about YAML configuration: +# https://clickhouse.tech/docs/en/operations/configuration-files/ + +# NOTE: User and query level settings are set up in "users.yaml" file. +# If you have accidentally specified user-level settings here, server won't start. +# You can either move the settings to the right place inside "users.xml" file +# or add skip_check_for_incorrect_settings: 1 here. +logger: + # Possible levels [1]: + # - none (turns off logging) + # - fatal + # - critical + # - error + # - warning + # - notice + # - information + # - debug + # - trace + # [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114 + level: trace + log: /var/log/clickhouse-server/clickhouse-server.log + errorlog: /var/log/clickhouse-server/clickhouse-server.err.log + # Rotation policy + # See https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/FileChannel.h#L54-L85 + size: 1000M + count: 10 + # console: 1 + # Default behavior is autodetection (log to console if not daemon mode and is tty) + + # Per level overrides (legacy): + # For example to suppress logging of the ConfigReloader you can use: + # NOTE: levels.logger is reserved, see below. + # levels: + # ConfigReloader: none + + # Per level overrides: + # For example to suppress logging of the RBAC for default user you can use: + # (But please note that the logger name maybe changed from version to version, even after minor upgrade) + # levels: + # - logger: + # name: 'ContextAccess (default)' + # level: none + # - logger: + # name: 'DatabaseOrdinary (test)' + # level: none + +# It is the name that will be shown in the clickhouse-client. +# By default, anything with "production" will be highlighted in red in query prompt. +# display_name: production + +# Port for HTTP API. See also 'https_port' for secure connections. +# This interface is also used by ODBC and JDBC drivers (DataGrip, Dbeaver, ...) +# and by most of web interfaces (embedded UI, Grafana, Redash, ...). +http_port: 8123 + +# Port for interaction by native protocol with: +# - clickhouse-client and other native ClickHouse tools (clickhouse-benchmark, clickhouse-copier); +# - clickhouse-server with other clickhouse-servers for distributed query processing; +# - ClickHouse drivers and applications supporting native protocol +# (this protocol is also informally called as "the TCP protocol"); +# See also 'tcp_port_secure' for secure connections. +tcp_port: 9000 + +# Compatibility with MySQL protocol. +# ClickHouse will pretend to be MySQL for applications connecting to this port. +mysql_port: 9004 + +# Compatibility with PostgreSQL protocol. +# ClickHouse will pretend to be PostgreSQL for applications connecting to this port. +postgresql_port: 9005 + +# HTTP API with TLS (HTTPS). +# You have to configure certificate to enable this interface. +# See the openSSL section below. +# https_port: 8443 + +# Native interface with TLS. 
+# You have to configure certificate to enable this interface. +# See the openSSL section below. +# tcp_port_secure: 9440 + +# Native interface wrapped with PROXYv1 protocol +# PROXYv1 header sent for every connection. +# ClickHouse will extract information about proxy-forwarded client address from the header. +# tcp_with_proxy_port: 9011 + +# Port for communication between replicas. Used for data exchange. +# It provides low-level data access between servers. +# This port should not be accessible from untrusted networks. +# See also 'interserver_http_credentials'. +# Data transferred over connections to this port should not go through untrusted networks. +# See also 'interserver_https_port'. +interserver_http_port: 9009 + +# Port for communication between replicas with TLS. +# You have to configure certificate to enable this interface. +# See the openSSL section below. +# See also 'interserver_http_credentials'. +# interserver_https_port: 9010 + +# Hostname that is used by other replicas to request this server. +# If not specified, than it is determined analogous to 'hostname -f' command. +# This setting could be used to switch replication to another network interface +# (the server may be connected to multiple networks via multiple addresses) +# interserver_http_host: example.yandex.ru + +# You can specify credentials for authenthication between replicas. +# This is required when interserver_https_port is accessible from untrusted networks, +# and also recommended to avoid SSRF attacks from possibly compromised services in your network. +# interserver_http_credentials: +# user: interserver +# password: '' + +# Listen specified address. +# Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. +# Notes: +# If you open connections from wildcard address, make sure that at least one of the following measures applied: +# - server is protected by firewall and not accessible from untrusted networks; +# - all users are restricted to subset of network addresses (see users.xml); +# - all users have strong passwords, only secure (TLS) interfaces are accessible, or connections are only made via TLS interfaces. +# - users without password have readonly access. +# See also: https://www.shodan.io/search?query=clickhouse +# listen_host: '::' + +# Same for hosts without support for IPv6: +# listen_host: 0.0.0.0 + +# Default values - try listen localhost on IPv4 and IPv6. +# listen_host: '::1' +# listen_host: 127.0.0.1 + +# Don't exit if IPv6 or IPv4 networks are unavailable while trying to listen. +# listen_try: 0 + +# Allow multiple servers to listen on the same address:port. This is not recommended. +# listen_reuse_port: 0 + +# listen_backlog: 64 +max_connections: 4096 + +# For 'Connection: keep-alive' in HTTP 1.1 +keep_alive_timeout: 3 + +# gRPC protocol (see src/Server/grpc_protos/clickhouse_grpc.proto for the API) +# grpc_port: 9100 +grpc: + enable_ssl: false + + # The following two files are used only if enable_ssl=1 + ssl_cert_file: /path/to/ssl_cert_file + ssl_key_file: /path/to/ssl_key_file + + # Whether server will request client for a certificate + ssl_require_client_auth: false + + # The following file is used only if ssl_require_client_auth=1 + ssl_ca_cert_file: /path/to/ssl_ca_cert_file + + # Default compression algorithm (applied if client doesn't specify another algorithm). + # Supported algorithms: none, deflate, gzip, stream_gzip + compression: deflate + + # Default compression level (applied if client doesn't specify another level). 
+ # Supported levels: none, low, medium, high + compression_level: medium + + # Send/receive message size limits in bytes. -1 means unlimited + max_send_message_size: -1 + max_receive_message_size: -1 + + # Enable if you want very detailed logs + verbose_logs: false + +# Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 +openSSL: + server: + # Used for https server AND secure tcp port + # openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt + certificateFile: /etc/clickhouse-server/server.crt + privateKeyFile: /etc/clickhouse-server/server.key + + # dhparams are optional. You can delete the dhParamsFile: element. + # To generate dhparams, use the following command: + # openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096 + # Only file format with BEGIN DH PARAMETERS is supported. + dhParamsFile: /etc/clickhouse-server/dhparam.pem + verificationMode: none + loadDefaultCAFile: true + cacheSessions: true + disableProtocols: 'sslv2,sslv3' + preferServerCiphers: true + client: + # Used for connecting to https dictionary source and secured Zookeeper communication + loadDefaultCAFile: true + cacheSessions: true + disableProtocols: 'sslv2,sslv3' + preferServerCiphers: true + + # Use for self-signed: verificationMode: none + invalidCertificateHandler: + # Use for self-signed: name: AcceptCertificateHandler + name: RejectCertificateHandler + +# Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 +# http_server_default_response: |- +#
+ +# Maximum number of concurrent queries. +max_concurrent_queries: 100 + +# Maximum memory usage (resident set size) for server process. +# Zero value or unset means default. Default is "max_server_memory_usage_to_ram_ratio" of available physical RAM. +# If the value is larger than "max_server_memory_usage_to_ram_ratio" of available physical RAM, it will be cut down. + +# The constraint is checked on query execution time. +# If a query tries to allocate memory and the current memory usage plus allocation is greater +# than specified threshold, exception will be thrown. + +# It is not practical to set this constraint to small values like just a few gigabytes, +# because memory allocator will keep this amount of memory in caches and the server will deny service of queries. +max_server_memory_usage: 0 + +# Maximum number of threads in the Global thread pool. +# This will default to a maximum of 10000 threads if not specified. +# This setting will be useful in scenarios where there are a large number +# of distributed queries that are running concurrently but are idling most +# of the time, in which case a higher number of threads might be required. +max_thread_pool_size: 10000 + +# On memory constrained environments you may have to set this to value larger than 1. +max_server_memory_usage_to_ram_ratio: 0.9 + +# Simple server-wide memory profiler. Collect a stack trace at every peak allocation step (in bytes). +# Data will be stored in system.trace_log table with query_id = empty string. +# Zero means disabled. +total_memory_profiler_step: 4194304 + +# Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. +# The probability is for every alloc/free regardless to the size of the allocation. +# Note that sampling happens only when the amount of untracked memory exceeds the untracked memory limit, +# which is 4 MiB by default but can be lowered if 'total_memory_profiler_step' is lowered. +# You may want to set 'total_memory_profiler_step' to 1 for extra fine grained sampling. +total_memory_tracker_sample_probability: 0 + +# Set limit on number of open files (default: maximum). This setting makes sense on Mac OS X because getrlimit() fails to retrieve +# correct maximum value. +# max_open_files: 262144 + +# Size of cache of uncompressed blocks of data, used in tables of MergeTree family. +# In bytes. Cache is single for server. Memory is allocated only on demand. +# Cache is used when 'use_uncompressed_cache' user setting turned on (off by default). +# Uncompressed cache is advantageous only for very short queries and in rare cases. + +# Note: uncompressed cache can be pointless for lz4, because memory bandwidth +# is slower than multi-core decompression on some server configurations. +# Enabling it can sometimes paradoxically make queries slower. +uncompressed_cache_size: 8589934592 + +# Approximate size of mark cache, used in tables of MergeTree family. +# In bytes. Cache is single for server. Memory is allocated only on demand. +# You should not lower this value. +mark_cache_size: 5368709120 + +# If you enable the `min_bytes_to_use_mmap_io` setting, +# the data in MergeTree tables can be read with mmap to avoid copying from kernel to userspace. +# It makes sense only for large files and helps only if data reside in page cache. +# To avoid frequent open/mmap/munmap/close calls (which are very expensive due to consequent page faults) +# and to reuse mappings from several threads and queries, +# the cache of mapped files is maintained. 
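As the comments above describe it, `max_server_memory_usage` and `max_server_memory_usage_to_ram_ratio` interact like this: a zero or unset limit falls back to the ratio of physical RAM, and an explicit limit larger than that is cut down to it. A tiny worked sketch of that arithmetic, under those assumed semantics:

```
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Effective server-wide memory limit, following the comments above:
//   0 (or unset)            -> ratio * physical RAM
//   larger than ratio * RAM -> cut down to ratio * RAM
static uint64_t effectiveMemoryLimit(uint64_t max_server_memory_usage,
                                     double max_server_memory_usage_to_ram_ratio,
                                     uint64_t physical_ram_bytes)
{
    const auto cap = static_cast<uint64_t>(max_server_memory_usage_to_ram_ratio * physical_ram_bytes);
    if (max_server_memory_usage == 0)
        return cap;
    return std::min(max_server_memory_usage, cap);
}

int main()
{
    const uint64_t ram = 64ULL * 1024 * 1024 * 1024;   // 64 GiB of physical RAM (example)

    // Defaults from the config above: limit 0, ratio 0.9 -> 57.6 GiB.
    std::printf("%.1f GiB\n", effectiveMemoryLimit(0, 0.9, ram) / (1024.0 * 1024 * 1024));

    // An explicit 100 GiB limit is still cut down to 0.9 * 64 GiB.
    std::printf("%.1f GiB\n",
                effectiveMemoryLimit(100ULL * 1024 * 1024 * 1024, 0.9, ram) / (1024.0 * 1024 * 1024));
}
```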
Its size is the number of mapped regions (usually equal to the number of mapped files). +# The amount of data in mapped files can be monitored +# in system.metrics, system.metric_log by the MMappedFiles, MMappedFileBytes metrics +# and in system.asynchronous_metrics, system.asynchronous_metrics_log by the MMapCacheCells metric, +# and also in system.events, system.processes, system.query_log, system.query_thread_log by the +# CreatedReadBufferMMap, CreatedReadBufferMMapFailed, MMappedFileCacheHits, MMappedFileCacheMisses events. +# Note that the amount of data in mapped files does not consume memory directly and is not accounted +# in query or server memory usage - because this memory can be discarded similar to OS page cache. +# The cache is dropped (the files are closed) automatically on removal of old parts in MergeTree, +# also it can be dropped manually by the SYSTEM DROP MMAP CACHE query. +mmap_cache_size: 1000 + +# Cache size for compiled expressions. +compiled_expression_cache_size: 1073741824 + +# Path to data directory, with trailing slash. +path: /var/lib/clickhouse/ + +# Path to temporary data for processing hard queries. +tmp_path: /var/lib/clickhouse/tmp/ + +# Policy from the for the temporary files. +# If not set is used, otherwise is ignored. + +# Notes: +# - move_factor is ignored +# - keep_free_space_bytes is ignored +# - max_data_part_size_bytes is ignored +# - you must have exactly one volume in that policy +# tmp_policy: tmp + +# Directory with user provided files that are accessible by 'file' table function. +user_files_path: /var/lib/clickhouse/user_files/ + +# LDAP server definitions. +ldap_servers: '' + +# List LDAP servers with their connection parameters here to later 1) use them as authenticators for dedicated local users, +# who have 'ldap' authentication mechanism specified instead of 'password', or to 2) use them as remote user directories. +# Parameters: +# host - LDAP server hostname or IP, this parameter is mandatory and cannot be empty. +# port - LDAP server port, default is 636 if enable_tls is set to true, 389 otherwise. +# bind_dn - template used to construct the DN to bind to. +# The resulting DN will be constructed by replacing all '{user_name}' substrings of the template with the actual +# user name during each authentication attempt. +# user_dn_detection - section with LDAP search parameters for detecting the actual user DN of the bound user. +# This is mainly used in search filters for further role mapping when the server is Active Directory. The +# resulting user DN will be used when replacing '{user_dn}' substrings wherever they are allowed. By default, +# user DN is set equal to bind DN, but once search is performed, it will be updated with to the actual detected +# user DN value. +# base_dn - template used to construct the base DN for the LDAP search. +# The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings +# of the template with the actual user name and bind DN during the LDAP search. +# scope - scope of the LDAP search. +# Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). +# search_filter - template used to construct the search filter for the LDAP search. +# The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}' +# substrings of the template with the actual user name, bind DN, and base DN during the LDAP search. +# Note, that the special characters must be escaped properly in XML. 
+# verification_cooldown - a period of time, in seconds, after a successful bind attempt, during which a user will be assumed +# to be successfully authenticated for all consecutive requests without contacting the LDAP server. +# Specify 0 (the default) to disable caching and force contacting the LDAP server for each authentication request. +# enable_tls - flag to trigger use of secure connection to the LDAP server. +# Specify 'no' for plain text (ldap://) protocol (not recommended). +# Specify 'yes' for LDAP over SSL/TLS (ldaps://) protocol (recommended, the default). +# Specify 'starttls' for legacy StartTLS protocol (plain text (ldap://) protocol, upgraded to TLS). +# tls_minimum_protocol_version - the minimum protocol version of SSL/TLS. +# Accepted values are: 'ssl2', 'ssl3', 'tls1.0', 'tls1.1', 'tls1.2' (the default). +# tls_require_cert - SSL/TLS peer certificate verification behavior. +# Accepted values are: 'never', 'allow', 'try', 'demand' (the default). +# tls_cert_file - path to certificate file. +# tls_key_file - path to certificate key file. +# tls_ca_cert_file - path to CA certificate file. +# tls_ca_cert_dir - path to the directory containing CA certificates. +# tls_cipher_suite - allowed cipher suite (in OpenSSL notation). +# Example: +# my_ldap_server: +# host: localhost +# port: 636 +# bind_dn: 'uid={user_name},ou=users,dc=example,dc=com' +# verification_cooldown: 300 +# enable_tls: yes +# tls_minimum_protocol_version: tls1.2 +# tls_require_cert: demand +# tls_cert_file: /path/to/tls_cert_file +# tls_key_file: /path/to/tls_key_file +# tls_ca_cert_file: /path/to/tls_ca_cert_file +# tls_ca_cert_dir: /path/to/tls_ca_cert_dir +# tls_cipher_suite: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384 + +# Example (typical Active Directory with configured user DN detection for further role mapping): +# my_ad_server: +# host: localhost +# port: 389 +# bind_dn: 'EXAMPLE\{user_name}' +# user_dn_detection: +# base_dn: CN=Users,DC=example,DC=com +# search_filter: '(&(objectClass=user)(sAMAccountName={user_name}))' +# enable_tls: no + +# To enable Kerberos authentication support for HTTP requests (GSS-SPNEGO), for those users who are explicitly configured +# to authenticate via Kerberos, define a single 'kerberos' section here. +# Parameters: +# principal - canonical service principal name, that will be acquired and used when accepting security contexts. +# This parameter is optional, if omitted, the default principal will be used. +# This parameter cannot be specified together with 'realm' parameter. +# realm - a realm, that will be used to restrict authentication to only those requests whose initiator's realm matches it. +# This parameter is optional, if omitted, no additional filtering by realm will be applied. +# This parameter cannot be specified together with 'principal' parameter. +# Example: +# kerberos: '' + +# Example: +# kerberos: +# principal: HTTP/clickhouse.example.com@EXAMPLE.COM + +# Example: +# kerberos: +# realm: EXAMPLE.COM + +# Sources to read users, roles, access rights, profiles of settings, quotas. +user_directories: + users_xml: + # Path to configuration file with predefined users. + path: users.yaml + local_directory: + # Path to folder where users created by SQL commands are stored. 
+ path: /var/lib/clickhouse/access/ + +# # To add an LDAP server as a remote user directory of users that are not defined locally, define a single 'ldap' section +# # with the following parameters: +# # server - one of LDAP server names defined in 'ldap_servers' config section above. +# # This parameter is mandatory and cannot be empty. +# # roles - section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server. +# # If no roles are specified here or assigned during role mapping (below), user will not be able to perform any +# # actions after authentication. +# # role_mapping - section with LDAP search parameters and mapping rules. +# # When a user authenticates, while still bound to LDAP, an LDAP search is performed using search_filter and the +# # name of the logged in user. For each entry found during that search, the value of the specified attribute is +# # extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the +# # value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by +# # CREATE ROLE command. +# # There can be multiple 'role_mapping' sections defined inside the same 'ldap' section. All of them will be +# # applied. +# # base_dn - template used to construct the base DN for the LDAP search. +# # The resulting DN will be constructed by replacing all '{user_name}', '{bind_dn}', and '{user_dn}' +# # substrings of the template with the actual user name, bind DN, and user DN during each LDAP search. +# # scope - scope of the LDAP search. +# # Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). +# # search_filter - template used to construct the search filter for the LDAP search. +# # The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', '{user_dn}', and +# # '{base_dn}' substrings of the template with the actual user name, bind DN, user DN, and base DN during +# # each LDAP search. +# # Note, that the special characters must be escaped properly in XML. +# # attribute - attribute name whose values will be returned by the LDAP search. 'cn', by default. +# # prefix - prefix, that will be expected to be in front of each string in the original list of strings returned by +# # the LDAP search. Prefix will be removed from the original strings and resulting strings will be treated +# # as local role names. Empty, by default. +# # Example: +# # ldap: +# # server: my_ldap_server +# # roles: +# # my_local_role1: '' +# # my_local_role2: '' +# # role_mapping: +# # base_dn: 'ou=groups,dc=example,dc=com' +# # scope: subtree +# # search_filter: '(&(objectClass=groupOfNames)(member={bind_dn}))' +# # attribute: cn +# # prefix: clickhouse_ +# # Example (typical Active Directory with role mapping that relies on the detected user DN): +# # ldap: +# # server: my_ad_server +# # role_mapping: +# # base_dn: 'CN=Users,DC=example,DC=com' +# # attribute: CN +# # scope: subtree +# # search_filter: '(&(objectClass=group)(member={user_dn}))' +# # prefix: clickhouse_ + +# Default profile of settings. +default_profile: default + +# Comma-separated list of prefixes for user-defined settings. +# custom_settings_prefixes: '' +# System profile of settings. This settings are used by internal processes (Distributed DDL worker and so on). +# system_profile: default + +# Buffer profile of settings. +# This settings are used by Buffer storage to flush data to the underlying table. 
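The `role_mapping` flow described above boils down to a small post-processing step: take the attribute values returned by the LDAP search, keep those that start with the configured prefix, strip the prefix, and treat the remainder as names of local roles (which must already exist via CREATE ROLE). A short, purely illustrative sketch of that step:

```
#include <cstdio>
#include <string>
#include <vector>

// Keep values starting with `prefix`, strip it, return the rest as role names.
static std::vector<std::string> mapAttributeValuesToRoles(
    const std::vector<std::string> & attribute_values, const std::string & prefix)
{
    std::vector<std::string> roles;
    for (const auto & value : attribute_values)
        if (value.compare(0, prefix.size(), prefix) == 0)
            roles.push_back(value.substr(prefix.size()));
    return roles;
}

int main()
{
    // e.g. 'cn' values of the groups found by search_filter for the logged-in user (made-up data)
    const std::vector<std::string> found = {"clickhouse_admin", "clickhouse_reader", "unrelated_group"};

    for (const auto & role : mapAttributeValuesToRoles(found, "clickhouse_"))
        std::printf("grant local role: %s\n", role.c_str());   // admin, reader
}
```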
+# Default: used from system_profile directive. +# buffer_profile: default + +# Default database. +default_database: default + +# Server time zone could be set here. + +# Time zone is used when converting between String and DateTime types, +# when printing DateTime in text formats and parsing DateTime from text, +# it is used in date and time related functions, if specific time zone was not passed as an argument. + +# Time zone is specified as identifier from IANA time zone database, like UTC or Africa/Abidjan. +# If not specified, system time zone at server startup is used. + +# Please note, that server could display time zone alias instead of specified name. +# Example: W-SU is an alias for Europe/Moscow and Zulu is an alias for UTC. +# timezone: Europe/Moscow + +# You can specify umask here (see "man umask"). Server will apply it on startup. +# Number is always parsed as octal. Default umask is 027 (other users cannot read logs, data files, etc; group can only read). +# umask: 022 + +# Perform mlockall after startup to lower first queries latency +# and to prevent clickhouse executable from being paged out under high IO load. +# Enabling this option is recommended but will lead to increased startup time for up to a few seconds. +mlock_executable: true + +# Reallocate memory for machine code ("text") using huge pages. Highly experimental. +remap_executable: false + +# Uncomment below in order to use JDBC table engine and function. +# To install and run JDBC bridge in background: +# * [Debian/Ubuntu] +# export MVN_URL=https://repo1.maven.org/maven2/ru/yandex/clickhouse/clickhouse-jdbc-bridge +# export PKG_VER=$(curl -sL $MVN_URL/maven-metadata.xml | grep '' | sed -e 's|.*>\(.*\)<.*|\1|') +# wget https://github.com/ClickHouse/clickhouse-jdbc-bridge/releases/download/v$PKG_VER/clickhouse-jdbc-bridge_$PKG_VER-1_all.deb +# apt install --no-install-recommends -f ./clickhouse-jdbc-bridge_$PKG_VER-1_all.deb +# clickhouse-jdbc-bridge & +# * [CentOS/RHEL] +# export MVN_URL=https://repo1.maven.org/maven2/ru/yandex/clickhouse/clickhouse-jdbc-bridge +# export PKG_VER=$(curl -sL $MVN_URL/maven-metadata.xml | grep '' | sed -e 's|.*>\(.*\)<.*|\1|') +# wget https://github.com/ClickHouse/clickhouse-jdbc-bridge/releases/download/v$PKG_VER/clickhouse-jdbc-bridge-$PKG_VER-1.noarch.rpm +# yum localinstall -y clickhouse-jdbc-bridge-$PKG_VER-1.noarch.rpm +# clickhouse-jdbc-bridge & +# Please refer to https://github.com/ClickHouse/clickhouse-jdbc-bridge#usage for more information. + +# jdbc_bridge: +# host: 127.0.0.1 +# port: 9019 + +# Configuration of clusters that could be used in Distributed tables. +# https://clickhouse.tech/docs/en/operations/table_engines/distributed/ +remote_servers: + # Test only shard config for testing distributed storage + test_shard_localhost: + # Inter-server per-cluster secret for Distributed queries + # default: no secret (no authentication will be performed) + + # If set, then Distributed queries will be validated on shards, so at least: + # - such cluster should exist on the shard, + # - such cluster should have the same secret. + + # And also (and which is more important), the initial_user will + # be used as current user for the query. 
+ + # Right now the protocol is pretty simple and it only takes into account: + # - cluster name + # - query + + # Also it will be nice if the following will be implemented: + # - source hostname (see interserver_http_host), but then it will depends from DNS, + # it can use IP address instead, but then the you need to get correct on the initiator node. + # - target hostname / ip address (same notes as for source hostname) + # - time-based security tokens + # secret: '' + shard: + # Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). + # internal_replication: false + # Optional. Shard weight when writing data. Default: 1. + # weight: 1 + replica: + host: localhost + port: 9000 + # Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). + # priority: 1 + test_cluster_two_shards_localhost: + shard: + - replica: + host: localhost + port: 9000 + - replica: + host: localhost + port: 9000 + test_cluster_two_shards: + shard: + - replica: + host: 127.0.0.1 + port: 9000 + - replica: + host: 127.0.0.2 + port: 9000 + test_cluster_two_shards_internal_replication: + shard: + - internal_replication: true + replica: + host: 127.0.0.1 + port: 9000 + - internal_replication: true + replica: + host: 127.0.0.2 + port: 9000 + test_shard_localhost_secure: + shard: + replica: + host: localhost + port: 9440 + secure: 1 + test_unavailable_shard: + shard: + - replica: + host: localhost + port: 9000 + - replica: + host: localhost + port: 1 + +# The list of hosts allowed to use in URL-related storage engines and table functions. +# If this section is not present in configuration, all hosts are allowed. +# remote_url_allow_hosts: + +# Host should be specified exactly as in URL. The name is checked before DNS resolution. +# Example: "yandex.ru", "yandex.ru." and "www.yandex.ru" are different hosts. +# If port is explicitly specified in URL, the host:port is checked as a whole. +# If host specified here without port, any port with this host allowed. +# "yandex.ru" -> "yandex.ru:443", "yandex.ru:80" etc. is allowed, but "yandex.ru:80" -> only "yandex.ru:80" is allowed. +# If the host is specified as IP address, it is checked as specified in URL. Example: "[2a02:6b8:a::a]". +# If there are redirects and support for redirects is enabled, every redirect (the Location field) is checked. + +# Regular expression can be specified. RE2 engine is used for regexps. +# Regexps are not aligned: don't forget to add ^ and $. Also don't forget to escape dot (.) metacharacter +# (forgetting to do so is a common source of error). + +# If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file. +# By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element. +# Values for substitutions are specified in /yandex/name_of_substitution elements in that file. + +# ZooKeeper is used to store metadata about replicas, when using Replicated tables. +# Optional. If you don't use replicated tables, you could omit that. +# See https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/replication/ + +# zookeeper: +# - node: +# host: example1 +# port: 2181 +# - node: +# host: example2 +# port: 2181 +# - node: +# host: example3 +# port: 2181 + +# Substitutions for parameters of replicated tables. +# Optional. If you don't use replicated tables, you could omit that. 
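The `remote_url_allow_hosts` rules above can be summarized as: the name is checked exactly as written in the URL (before DNS resolution), an allow-list entry with a port matches only that exact host:port, and an entry without a port matches the host on any port. A minimal sketch of that matching logic; regexp entries and bracketed IPv6 literals are deliberately left out of this simplified version.

```
#include <cstdio>
#include <string>
#include <vector>

// true if `host:port` from a URL is allowed by the list.
// An entry "example.com"     allows example.com on any port.
// An entry "example.com:443" allows only that exact host:port pair.
static bool isHostAllowed(const std::vector<std::string> & allow_list,
                          const std::string & host, const std::string & port)
{
    const std::string host_port = host + ":" + port;
    for (const auto & entry : allow_list)
    {
        const bool entry_has_port = entry.find(':') != std::string::npos;
        if (entry_has_port ? entry == host_port : entry == host)
            return true;
    }
    return false;
}

int main()
{
    const std::vector<std::string> allow_list = {"yandex.ru", "example.com:443"};

    std::printf("%d\n", isHostAllowed(allow_list, "yandex.ru", "80"));     // 1: any port allowed
    std::printf("%d\n", isHostAllowed(allow_list, "example.com", "443"));  // 1: exact host:port
    std::printf("%d\n", isHostAllowed(allow_list, "example.com", "80"));   // 0: port not listed
}
```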
+# See https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/replication/#creating-replicated-tables
+# macros:
+#     shard: 01
+#     replica: example01-01-1
+
+# Reloading interval for embedded dictionaries, in seconds. Default: 3600.
+builtin_dictionaries_reload_interval: 3600
+
+# Maximum session timeout, in seconds. Default: 3600.
+max_session_timeout: 3600
+
+# Default session timeout, in seconds. Default: 60.
+default_session_timeout: 60
+
+# Sending data to Graphite for monitoring. Several sections can be defined.
+# interval - send every X seconds
+# root_path - prefix for keys
+# hostname_in_path - append hostname to root_path (default = true)
+# metrics - send data from table system.metrics
+# events - send data from table system.events
+# asynchronous_metrics - send data from table system.asynchronous_metrics
+
+# graphite:
+#     host: localhost
+#     port: 42000
+#     timeout: 0.1
+#     interval: 60
+#     root_path: one_min
+#     hostname_in_path: true
+
+#     metrics: true
+#     events: true
+#     events_cumulative: false
+#     asynchronous_metrics: true
+
+# graphite:
+#     host: localhost
+#     port: 42000
+#     timeout: 0.1
+#     interval: 1
+#     root_path: one_sec
+
+#     metrics: true
+#     events: true
+#     events_cumulative: false
+#     asynchronous_metrics: false
+
+# Serve endpoint for Prometheus monitoring.
+# endpoint - metrics path (relative to root, starting with "/")
+# port - port to set up the server on. If not defined or 0, then http_port is used
+# metrics - send data from table system.metrics
+# events - send data from table system.events
+# asynchronous_metrics - send data from table system.asynchronous_metrics
+# status_info - send data from different components of CH, e.g. dictionaries status
+
+# prometheus:
+#     endpoint: /metrics
+#     port: 9363
+
+#     metrics: true
+#     events: true
+#     asynchronous_metrics: true
+#     status_info: true
+
+# Query log. Used only for queries with setting log_queries = 1.
+query_log:
+    # What table to insert data into. If the table does not exist, it will be created.
+    # When the query log structure is changed after a system update,
+    # the old table will be renamed and a new table will be created automatically.
+    database: system
+    table: query_log
+
+    # PARTITION BY expr: https://clickhouse.yandex/docs/en/table_engines/mergetree-family/custom_partitioning_key/
+    # Example:
+    #     event_date
+    #     toMonday(event_date)
+    #     toYYYYMM(event_date)
+    #     toStartOfHour(event_time)
+    partition_by: toYYYYMM(event_date)
+
+    # Table TTL specification: https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/#mergetree-table-ttl
+    # Example:
+    #     event_date + INTERVAL 1 WEEK
+    #     event_date + INTERVAL 7 DAY DELETE
+    #     event_date + INTERVAL 2 WEEK TO DISK 'bbb'
+
+    # ttl: 'event_date + INTERVAL 30 DAY DELETE'
+
+    # Instead of partition_by, you can provide a full engine expression (starting with ENGINE = ) with parameters.
+    # Example: engine: 'ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024'
+
+    # Interval of flushing data.
+    flush_interval_milliseconds: 7500
+
+# Trace log. Stores stack traces collected by query profilers.
+# See query_profiler_real_time_period_ns and query_profiler_cpu_time_period_ns settings.
+trace_log:
+    database: system
+    table: trace_log
+    partition_by: toYYYYMM(event_date)
+    flush_interval_milliseconds: 7500
+
+# Query thread log. Has information about all threads that participated in query execution.
+# Used only for queries with setting log_query_threads = 1.
+
+# Query thread log. Has information about all threads that participated in query execution.
+# Used only for queries with setting log_query_threads = 1.
+query_thread_log:
+    database: system
+    table: query_thread_log
+    partition_by: toYYYYMM(event_date)
+    flush_interval_milliseconds: 7500
+
+# Uncomment to use the part log.
+# The part log contains information about all actions with parts in MergeTree tables (creation, deletion, merges, downloads).
+# part_log:
+#     database: system
+#     table: part_log
+#     flush_interval_milliseconds: 7500
+
+# Uncomment to write the text log into a table.
+# The text log contains all information from the usual server log but stores it in a structured and efficient way.
+# The level of the messages that go to the table can be limited with 'level'; if not specified, all messages will go to the table.
+# text_log:
+#     database: system
+#     table: text_log
+#     flush_interval_milliseconds: 7500
+#     level: ''
+
+# The metric log contains rows with current values of ProfileEvents and CurrentMetrics, collected with the "collect_interval_milliseconds" interval.
+metric_log:
+    database: system
+    table: metric_log
+    flush_interval_milliseconds: 7500
+    collect_interval_milliseconds: 1000
+
+# The asynchronous metric log contains values of metrics from
+# system.asynchronous_metrics.
+asynchronous_metric_log:
+    database: system
+    table: asynchronous_metric_log
+
+    # Asynchronous metrics are updated once a minute, so there is
+    # no need to flush more often.
+    flush_interval_milliseconds: 60000
+
+# OpenTelemetry log contains OpenTelemetry trace spans.
+opentelemetry_span_log:
+
+    # The default table creation code is insufficient, so this spec
+    # is a workaround. There is no 'event_time' for this log, but two times:
+    # start and finish. It is sorted by finish time, to avoid inserting
+    # data too far away in the past (probably we can sometimes insert a span
+    # that is seconds earlier than the last span in the table, due to a race
+    # between several spans inserted in parallel). This gives the spans a
+    # global order that we can use to e.g. retry insertion into some external
+    # system.
+    engine: |-
+        engine MergeTree
+        partition by toYYYYMM(finish_date)
+        order by (finish_date, finish_time_us, trace_id)
+    database: system
+    table: opentelemetry_span_log
+    flush_interval_milliseconds: 7500
+
+# Crash log. Stores stack traces for fatal errors.
+# This table is normally empty.
+crash_log:
+    database: system
+    table: crash_log
+    partition_by: ''
+    flush_interval_milliseconds: 1000
+
+# Parameters for embedded dictionaries, used in Yandex.Metrica.
+# See https://clickhouse.yandex/docs/en/dicts/internal_dicts/
+
+# Path to the file with the region hierarchy.
+# path_to_regions_hierarchy_file: /opt/geo/regions_hierarchy.txt
+
+# Path to the directory with files containing names of regions.
+# path_to_regions_names_files: /opt/geo/
+
+
+# top_level_domains_path: /var/lib/clickhouse/top_level_domains/
+# Custom TLD lists.
+# Format: name: /path/to/file
+
+# Changes will not be applied without a server restart.
+# The path to the list is under top_level_domains_path (see above).
+top_level_domains_lists: ''
+
+# public_suffix_list: /path/to/public_suffix_list.dat
+
+# Configuration of external dictionaries. See:
+# https://clickhouse.tech/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts
+dictionaries_config: '*_dictionary.xml'
+
+# Uncomment if you want data to be compressed 30-100% better.
+# Don't do that if you just started using ClickHouse.
+
+# compression:
+#     # Set of variants. Checked in order. The last matching case wins. If nothing matches, lz4 will be used.
+#     case:
+#         # Conditions. All must be satisfied. Some conditions may be omitted.
+#         # min_part_size: 10000000000        # Min part size in bytes.
+#         # min_part_size_ratio: 0.01         # Min size of the part relative to the whole table size.
+#         # What compression method to use.
+#         method: zstd
+
+# Allows executing distributed DDL queries (CREATE, DROP, ALTER, RENAME) on a cluster.
+# Works only if ZooKeeper is enabled. Comment it out if such functionality isn't required.
+distributed_ddl:
+    # Path in ZooKeeper to the queue with DDL queries
+    path: /clickhouse/task_queue/ddl
+
+    # Settings from this profile will be used to execute DDL queries
+    # profile: default
+
+    # Controls how many ON CLUSTER queries can be run simultaneously.
+    # pool_size: 1
+
+    # Cleanup settings (active tasks will not be removed)
+
+    # Controls task TTL (default 1 week)
+    # task_max_lifetime: 604800
+
+    # Controls how often cleanup should be performed (in seconds)
+    # cleanup_delay_period: 60
+
+    # Controls how many tasks can be in the queue
+    # max_tasks_in_queue: 1000
+
+# Settings to fine-tune MergeTree tables. See documentation in the source code, in MergeTreeSettings.h
+# merge_tree:
+#     max_suspicious_broken_parts: 5
+
+# Protection from accidental DROP.
+# If the size of a MergeTree table is greater than max_table_size_to_drop (in bytes), the table cannot be dropped with any DROP query.
+# If you want to delete one table and don't want to change the clickhouse-server config, you can create the special file /flags/force_drop_table and perform the DROP once.
+# By default max_table_size_to_drop is 50GB; max_table_size_to_drop=0 allows dropping any table.
+# The same applies to max_partition_size_to_drop.
+# Uncomment to disable the protection.
+
+# max_table_size_to_drop: 0
+# max_partition_size_to_drop: 0
+
+# Example of parameters for the GraphiteMergeTree table engine
+graphite_rollup_example:
+    pattern:
+        regexp: click_cost
+        function: any
+        retention:
+            - age: 0
+              precision: 3600
+            - age: 86400
+              precision: 60
+    default:
+        function: max
+        retention:
+            - age: 0
+              precision: 60
+            - age: 3600
+              precision: 300
+            - age: 86400
+              precision: 3600
+
+# Directory containing schema files for various input formats.
+# The directory will be created if it doesn't exist.
+format_schema_path: /var/lib/clickhouse/format_schemas/
+
+# Default query masking rules; matching lines will be replaced with something else in the logs
+# (both text logs and system.query_log).
+# name - name for the rule (optional)
+# regexp - RE2-compatible regular expression (mandatory)
+# replace - substitution string for sensitive data (optional; by default - six asterisks)
+query_masking_rules:
+    rule:
+        name: hide encrypt/decrypt arguments
+        regexp: '((?:aes_)?(?:encrypt|decrypt)(?:_mysql)?)\s*\(\s*(?:''(?:\\''|.)+''|.*?)\s*\)'
+        # or, more secure but also more invasive:
+        # (aes_\w+)\s*\(.*\)
+        replace: \1(???)
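To make the effect of this rule concrete, here is a hedged illustration; the query text is hypothetical, and the exact masked form depends on how the regexp above matches the argument list.

```yaml
# With the rule above, a logged statement such as
#     SELECT encrypt('aes-256-cfb128', secret_column, 'MyKey')
# should appear in the text log and in system.query_log roughly as
#     SELECT encrypt(???)
```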
+
+# Uncomment to use custom HTTP handlers.
+# Rules are checked from top to bottom; the first match runs the handler.
+# url - to match the request URL, you can use the 'regex:' prefix for regex matching (optional)
+# methods - to match the request method, you can use commas to separate multiple method matches (optional)
+# headers - to match the request headers, match each child element (the child element name is the header name), you can use the 'regex:' prefix for regex matching (optional)
+# handler - the request handler
+# type - supported types: static, dynamic_query_handler, predefined_query_handler
+# query - used with the predefined_query_handler type; executes the query when the handler is called
+# query_param_name - used with the dynamic_query_handler type; extracts and executes the value of the HTTP request parameter named by query_param_name
+# status - used with the static type; response status code
+# content_type - used with the static type; response content-type
+# response_content - used with the static type; response content sent to the client. When using the 'file://' or 'config://' prefix, the content is read from the file or configuration and sent to the client.
+
+# http_handlers:
+#     - rule:
+#           url: /
+#           methods: POST,GET
+#           headers:
+#               pragma: no-cache
+#           handler:
+#               type: dynamic_query_handler
+#               query_param_name: query
+#     - rule:
+#           url: /predefined_query
+#           methods: POST,GET
+#           handler:
+#               type: predefined_query_handler
+#               query: 'SELECT * FROM system.settings'
+#     - rule:
+#           handler:
+#               type: static
+#               status: 200
+#               content_type: 'text/plain; charset=UTF-8'
+#               response_content: config://http_server_default_response
+
+send_crash_reports:
+    # Changing this to true allows sending crash reports to
+    # the ClickHouse core developers team via Sentry (https://sentry.io).
+    # Doing so at least in pre-production environments is highly appreciated.
+    enabled: false
+    # Change to true if you don't feel comfortable attaching the server hostname to the crash report.
+    anonymize: false
+    # The default endpoint should be changed to a different Sentry DSN only if you have
+    # some in-house engineers or hired consultants who are going to debug ClickHouse issues for you.
+    endpoint: 'https://6f33034cfe684dd7a3ab9875e57b1c8d@o388870.ingest.sentry.io/5226277'
+    # Uncomment to disable ClickHouse internal DNS caching.
+    # disable_internal_dns_cache: 1
diff --git a/programs/server/users.yaml.example b/programs/server/users.yaml.example
new file mode 100644
index 00000000000..76aee04c19b
--- /dev/null
+++ b/programs/server/users.yaml.example
@@ -0,0 +1,107 @@
+# Profiles of settings.
+profiles:
+    # Default settings.
+    default:
+        # Maximum memory usage for processing a single query, in bytes.
+        max_memory_usage: 10000000000
+
+        # How to choose between replicas during distributed query processing.
+        # random - choose a random replica from the set of replicas with the minimum number of errors
+        # nearest_hostname - from the set of replicas with the minimum number of errors, choose the replica
+        #   with the minimum number of different characters between the replica's hostname and the local hostname (Hamming distance).
+        # in_order - the first live replica is chosen, in the specified order.
+        # first_or_random - if the first replica has a higher number of errors, pick a random one from the replicas with the minimum number of errors.
+        load_balancing: random
+
+    # Profile that allows only read queries.
+    readonly:
+        readonly: 1
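Profiles are just named groups of settings, so additional ones can sit alongside `default` and `readonly`. A minimal hypothetical sketch (the profile name and the memory limit are made up for illustration, reusing only settings that already appear in this file):

```yaml
profiles:
    # Hypothetical profile for ad-hoc reporting users: read-only with a tighter memory cap.
    reporting:
        readonly: 1
        max_memory_usage: 2000000000
```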
+
+# Users and ACL.
+users:
+    # If the user name is not specified, the 'default' user is used.
+    default:
+        # The password can be specified in plaintext or as SHA256 (in hex format).
+        #
+        # If you want to specify the password in plaintext (not recommended), place it in the 'password' element.
+        # Example: password: qwerty
+        # The password can be empty.
+        #
+        # If you want to specify SHA256, place it in the 'password_sha256_hex' element.
+        # Example: password_sha256_hex: 65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5
+        # Restriction of SHA256: it is impossible to connect to ClickHouse using the MySQL JS client (as of July 2019).
+        #
+        # If you want to specify double SHA1, place it in the 'password_double_sha1_hex' element.
+        # Example: password_double_sha1_hex: e395796d6546b1b65db9d665cd43f0e858dd4303
+        #
+        # If you want to specify a previously defined LDAP server (see 'ldap_servers' in the main config) for authentication,
+        # place its name in the 'server' element inside the 'ldap' element.
+        # Example: ldap:
+        #              server: my_ldap_server
+        #
+        # If you want to authenticate the user via Kerberos (assuming Kerberos is enabled, see 'kerberos' in the main config),
+        # place a 'kerberos' element instead of the 'password' (and similar) elements.
+        # The name part of the canonical principal name of the initiator must match the user name for authentication to succeed.
+        # You can also place a 'realm' element inside the 'kerberos' element to further restrict authentication to only those requests
+        # whose initiator's realm matches it.
+        # Example: kerberos: ''
+        # Example: kerberos:
+        #              realm: EXAMPLE.COM
+        #
+        # How to generate a decent password:
+        # Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
+        # The first line will be the password and the second the corresponding SHA256.
+        #
+        # How to generate double SHA1:
+        # Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
+        # The first line will be the password and the second the corresponding double SHA1.
+
+        password: ''
+
+        # List of networks with open access.
+        #
+        # To open access from everywhere, specify:
+        #     - ip: '::/0'
+        #
+        # To open access only from localhost, specify:
+        #     - ip: '::1'
+        #     - ip: 127.0.0.1
+        #
+        # Each element of the list has one of the following forms:
+        # ip: IP address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
+        #     2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
+        # host: Hostname. Example: server01.yandex.ru.
+        #     To check access, a DNS query is performed, and all received addresses are compared to the peer address.
+        # host_regexp: Regular expression for host names. Example: ^server\d\d-\d\d-\d\.yandex\.ru$
+        #     To check access, a DNS PTR query is performed for the peer address and then the regexp is applied.
+        #     Then, for the result of the PTR query, another DNS query is performed and all received addresses are compared to the peer address.
+        #     It is strongly recommended that the regexp ends with $ and that the whole expression is enclosed in ''.
+        # All results of DNS requests are cached until the server restarts.
+
+        networks:
+            ip: '::/0'
+
+        # Settings profile for the user.
+        profile: default
+
+        # Quota for the user.
+        quota: default
+
+        # The user can create other users and grant rights to them.
+        # access_management: 1
+
+# Quotas.
+quotas:
+    # Name of the quota.
+    default:
+        # Limits for a time interval. You can specify many intervals with different limits.
+        interval:
+            # Length of the interval.
+            duration: 3600
+
+            # No limits. Just calculate resource usage for the time interval.
+ queries: 0 + errors: 0 + result_rows: 0 + read_rows: 0 + execution_time: 0 diff --git a/src/Access/ContextAccess.cpp b/src/Access/ContextAccess.cpp index 0bcaef1e441..90495a83dfc 100644 --- a/src/Access/ContextAccess.cpp +++ b/src/Access/ContextAccess.cpp @@ -143,11 +143,13 @@ ContextAccess::ContextAccess(const AccessControlManager & manager_, const Params : manager(&manager_) , params(params_) { + std::lock_guard lock{mutex}; + subscription_for_user_change = manager->subscribeForChanges( *params.user_id, [this](const UUID &, const AccessEntityPtr & entity) { UserPtr changed_user = entity ? typeid_cast(entity) : nullptr; - std::lock_guard lock{mutex}; + std::lock_guard lock2{mutex}; setUser(changed_user); }); @@ -189,7 +191,7 @@ void ContextAccess::setUser(const UserPtr & user_) const current_roles_with_admin_option = user->granted_roles.findGrantedWithAdminOption(params.current_roles); } - subscription_for_roles_changes = {}; + subscription_for_roles_changes.reset(); enabled_roles = manager->getEnabledRoles(current_roles, current_roles_with_admin_option); subscription_for_roles_changes = enabled_roles->subscribeForChanges([this](const std::shared_ptr & roles_info_) { diff --git a/src/Access/ExternalAuthenticators.cpp b/src/Access/ExternalAuthenticators.cpp index 0c4d2f417c9..d4100c4e520 100644 --- a/src/Access/ExternalAuthenticators.cpp +++ b/src/Access/ExternalAuthenticators.cpp @@ -20,13 +20,42 @@ namespace ErrorCodes namespace { -auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const String & name) +void parseLDAPSearchParams(LDAPClient::SearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix) +{ + const bool has_base_dn = config.has(prefix + ".base_dn"); + const bool has_search_filter = config.has(prefix + ".search_filter"); + const bool has_attribute = config.has(prefix + ".attribute"); + const bool has_scope = config.has(prefix + ".scope"); + + if (has_base_dn) + params.base_dn = config.getString(prefix + ".base_dn"); + + if (has_search_filter) + params.search_filter = config.getString(prefix + ".search_filter"); + + if (has_attribute) + params.attribute = config.getString(prefix + ".attribute"); + + if (has_scope) + { + auto scope = config.getString(prefix + ".scope"); + boost::algorithm::to_lower(scope); + + if (scope == "base") params.scope = LDAPClient::SearchParams::Scope::BASE; + else if (scope == "one_level") params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL; + else if (scope == "subtree") params.scope = LDAPClient::SearchParams::Scope::SUBTREE; + else if (scope == "children") params.scope = LDAPClient::SearchParams::Scope::CHILDREN; + else + throw Exception("Invalid value for 'scope' field of LDAP search parameters in '" + prefix + + "' section, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS); + } +} + +void parseLDAPServer(LDAPClient::Params & params, const Poco::Util::AbstractConfiguration & config, const String & name) { if (name.empty()) throw Exception("LDAP server name cannot be empty", ErrorCodes::BAD_ARGUMENTS); - LDAPClient::Params params; - const String ldap_server_config = "ldap_servers." 
+ name; const bool has_host = config.has(ldap_server_config + ".host"); @@ -34,6 +63,7 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str const bool has_bind_dn = config.has(ldap_server_config + ".bind_dn"); const bool has_auth_dn_prefix = config.has(ldap_server_config + ".auth_dn_prefix"); const bool has_auth_dn_suffix = config.has(ldap_server_config + ".auth_dn_suffix"); + const bool has_user_dn_detection = config.has(ldap_server_config + ".user_dn_detection"); const bool has_verification_cooldown = config.has(ldap_server_config + ".verification_cooldown"); const bool has_enable_tls = config.has(ldap_server_config + ".enable_tls"); const bool has_tls_minimum_protocol_version = config.has(ldap_server_config + ".tls_minimum_protocol_version"); @@ -66,6 +96,17 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str params.bind_dn = auth_dn_prefix + "{user_name}" + auth_dn_suffix; } + if (has_user_dn_detection) + { + if (!params.user_dn_detection) + { + params.user_dn_detection.emplace(); + params.user_dn_detection->attribute = "dn"; + } + + parseLDAPSearchParams(*params.user_dn_detection, config, ldap_server_config + ".user_dn_detection"); + } + if (has_verification_cooldown) params.verification_cooldown = std::chrono::seconds{config.getUInt64(ldap_server_config + ".verification_cooldown")}; @@ -143,14 +184,10 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str } else params.port = (params.enable_tls == LDAPClient::Params::TLSEnable::YES ? 636 : 389); - - return params; } -auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config) +void parseKerberosParams(GSSAcceptorContext::Params & params, const Poco::Util::AbstractConfiguration & config) { - GSSAcceptorContext::Params params; - Poco::Util::AbstractConfiguration::Keys keys; config.keys("kerberos", keys); @@ -180,12 +217,20 @@ auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config) params.realm = config.getString("kerberos.realm", ""); params.principal = config.getString("kerberos.principal", ""); - - return params; } } +void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix) +{ + parseLDAPSearchParams(params, config, prefix); + + const bool has_prefix = config.has(prefix + ".prefix"); + + if (has_prefix) + params.prefix = config.getString(prefix + ".prefix"); +} + void ExternalAuthenticators::reset() { std::scoped_lock lock(mutex); @@ -229,7 +274,8 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur { try { - ldap_client_params_blueprint.insert_or_assign(ldap_server_name, parseLDAPServer(config, ldap_server_name)); + ldap_client_params_blueprint.erase(ldap_server_name); + parseLDAPServer(ldap_client_params_blueprint.emplace(ldap_server_name, LDAPClient::Params{}).first->second, config, ldap_server_name); } catch (...) { @@ -240,7 +286,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur try { if (kerberos_keys_count > 0) - kerberos_params = parseKerberosParams(config); + parseKerberosParams(kerberos_params.emplace(), config); } catch (...) 
{ @@ -249,7 +295,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur } bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const BasicCredentials & credentials, - const LDAPClient::SearchParamsList * search_params, LDAPClient::SearchResultsList * search_results) const + const LDAPClient::RoleSearchParamsList * role_search_params, LDAPClient::SearchResultsList * role_search_results) const { std::optional params; std::size_t params_hash = 0; @@ -267,9 +313,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B params->password = credentials.getPassword(); params->combineCoreHash(params_hash); - if (search_params) + if (role_search_params) { - for (const auto & params_instance : *search_params) + for (const auto & params_instance : *role_search_params) { params_instance.combineHash(params_hash); } @@ -301,14 +347,14 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B // Ensure that search_params are compatible. ( - search_params == nullptr ? - entry.last_successful_search_results.empty() : - search_params->size() == entry.last_successful_search_results.size() + role_search_params == nullptr ? + entry.last_successful_role_search_results.empty() : + role_search_params->size() == entry.last_successful_role_search_results.size() ) ) { - if (search_results) - *search_results = entry.last_successful_search_results; + if (role_search_results) + *role_search_results = entry.last_successful_role_search_results; return true; } @@ -326,7 +372,7 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B } LDAPSimpleAuthClient client(params.value()); - const auto result = client.authenticate(search_params, search_results); + const auto result = client.authenticate(role_search_params, role_search_results); const auto current_check_timestamp = std::chrono::steady_clock::now(); // Update the cache, but only if this is the latest check and the server is still configured in a compatible way. @@ -345,9 +391,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B std::size_t new_params_hash = 0; new_params.combineCoreHash(new_params_hash); - if (search_params) + if (role_search_params) { - for (const auto & params_instance : *search_params) + for (const auto & params_instance : *role_search_params) { params_instance.combineHash(new_params_hash); } @@ -363,17 +409,17 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B entry.last_successful_params_hash = params_hash; entry.last_successful_authentication_timestamp = current_check_timestamp; - if (search_results) - entry.last_successful_search_results = *search_results; + if (role_search_results) + entry.last_successful_role_search_results = *role_search_results; else - entry.last_successful_search_results.clear(); + entry.last_successful_role_search_results.clear(); } else if ( entry.last_successful_params_hash != params_hash || ( - search_params == nullptr ? - !entry.last_successful_search_results.empty() : - search_params->size() != entry.last_successful_search_results.size() + role_search_params == nullptr ? 
+ !entry.last_successful_role_search_results.empty() : + role_search_params->size() != entry.last_successful_role_search_results.size() ) ) { diff --git a/src/Access/ExternalAuthenticators.h b/src/Access/ExternalAuthenticators.h index c8feea7eada..24f1f7b6528 100644 --- a/src/Access/ExternalAuthenticators.h +++ b/src/Access/ExternalAuthenticators.h @@ -34,7 +34,7 @@ public: // The name and readiness of the credentials must be verified before calling these. bool checkLDAPCredentials(const String & server, const BasicCredentials & credentials, - const LDAPClient::SearchParamsList * search_params = nullptr, LDAPClient::SearchResultsList * search_results = nullptr) const; + const LDAPClient::RoleSearchParamsList * role_search_params = nullptr, LDAPClient::SearchResultsList * role_search_results = nullptr) const; bool checkKerberosCredentials(const String & realm, const GSSAcceptorContext & credentials) const; GSSAcceptorContext::Params getKerberosParams() const; @@ -44,7 +44,7 @@ private: { std::size_t last_successful_params_hash = 0; std::chrono::steady_clock::time_point last_successful_authentication_timestamp; - LDAPClient::SearchResultsList last_successful_search_results; + LDAPClient::SearchResultsList last_successful_role_search_results; }; using LDAPCache = std::unordered_map; // user name -> cache entry @@ -58,4 +58,6 @@ private: std::optional kerberos_params; }; +void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix); + } diff --git a/src/Access/LDAPAccessStorage.cpp b/src/Access/LDAPAccessStorage.cpp index b47a9b3e041..c1d54e8c9aa 100644 --- a/src/Access/LDAPAccessStorage.cpp +++ b/src/Access/LDAPAccessStorage.cpp @@ -68,34 +68,15 @@ void LDAPAccessStorage::setConfiguration(AccessControlManager * access_control_m common_roles_cfg.insert(role_names.begin(), role_names.end()); } - LDAPClient::SearchParamsList role_search_params_cfg; + LDAPClient::RoleSearchParamsList role_search_params_cfg; if (has_role_mapping) { Poco::Util::AbstractConfiguration::Keys all_keys; config.keys(prefix, all_keys); for (const auto & key : all_keys) { - if (key != "role_mapping" && key.find("role_mapping[") != 0) - continue; - - const String rm_prefix = prefix_str + key; - const String rm_prefix_str = rm_prefix + '.'; - role_search_params_cfg.emplace_back(); - auto & rm_params = role_search_params_cfg.back(); - - rm_params.base_dn = config.getString(rm_prefix_str + "base_dn", ""); - rm_params.search_filter = config.getString(rm_prefix_str + "search_filter", ""); - rm_params.attribute = config.getString(rm_prefix_str + "attribute", "cn"); - rm_params.prefix = config.getString(rm_prefix_str + "prefix", ""); - - auto scope = config.getString(rm_prefix_str + "scope", "subtree"); - boost::algorithm::to_lower(scope); - if (scope == "base") rm_params.scope = LDAPClient::SearchParams::Scope::BASE; - else if (scope == "one_level") rm_params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL; - else if (scope == "subtree") rm_params.scope = LDAPClient::SearchParams::Scope::SUBTREE; - else if (scope == "children") rm_params.scope = LDAPClient::SearchParams::Scope::CHILDREN; - else - throw Exception("Invalid value of 'scope' field in '" + key + "' section of LDAP user directory, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS); + if (key == "role_mapping" || key.find("role_mapping[") == 0) + parseLDAPRoleSearchParams(role_search_params_cfg.emplace_back(), config, prefix_str + key); } } @@ 
-364,7 +345,7 @@ std::set LDAPAccessStorage::mapExternalRolesNoLock(const LDAPClient::Sea bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials, - const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const + const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const { if (!credentials.isReady()) return false; @@ -373,7 +354,7 @@ bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const C return false; if (const auto * basic_credentials = dynamic_cast(&credentials)) - return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &search_results); + return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &role_search_results); return false; } diff --git a/src/Access/LDAPAccessStorage.h b/src/Access/LDAPAccessStorage.h index ea0ab47c225..33ac9f0a914 100644 --- a/src/Access/LDAPAccessStorage.h +++ b/src/Access/LDAPAccessStorage.h @@ -68,12 +68,12 @@ private: void updateAssignedRolesNoLock(const UUID & id, const String & user_name, const LDAPClient::SearchResultsList & external_roles) const; std::set mapExternalRolesNoLock(const LDAPClient::SearchResultsList & external_roles) const; bool areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials, - const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const; + const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const; mutable std::recursive_mutex mutex; AccessControlManager * access_control_manager = nullptr; String ldap_server_name; - LDAPClient::SearchParamsList role_search_params; + LDAPClient::RoleSearchParamsList role_search_params; std::set common_role_names; // role name that should be granted to all users at all times mutable std::map external_role_hashes; // user name -> LDAPClient::SearchResultsList hash (most recently retrieved and processed) mutable std::map> users_per_roles; // role name -> user names (...it should be granted to; may but don't have to exist for common roles) diff --git a/src/Access/LDAPClient.cpp b/src/Access/LDAPClient.cpp index 5c4b7dd8d99..a8f9675774b 100644 --- a/src/Access/LDAPClient.cpp +++ b/src/Access/LDAPClient.cpp @@ -32,6 +32,11 @@ void LDAPClient::SearchParams::combineHash(std::size_t & seed) const boost::hash_combine(seed, static_cast(scope)); boost::hash_combine(seed, search_filter); boost::hash_combine(seed, attribute); +} + +void LDAPClient::RoleSearchParams::combineHash(std::size_t & seed) const +{ + SearchParams::combineHash(seed); boost::hash_combine(seed, prefix); } @@ -42,6 +47,9 @@ void LDAPClient::Params::combineCoreHash(std::size_t & seed) const boost::hash_combine(seed, bind_dn); boost::hash_combine(seed, user); boost::hash_combine(seed, password); + + if (user_dn_detection) + user_dn_detection->combineHash(seed); } LDAPClient::LDAPClient(const Params & params_) @@ -286,18 +294,33 @@ void LDAPClient::openConnection() if (params.enable_tls == LDAPClient::Params::TLSEnable::YES_STARTTLS) diag(ldap_start_tls_s(handle, nullptr, nullptr)); + final_user_name = escapeForLDAP(params.user); + final_bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", final_user_name} }); + final_user_dn = final_bind_dn; // The default value... may be updated right after a successful bind. 
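+    // Note: these values back the {user_name}, {bind_dn} and {user_dn} placeholders
+    // that search() substitutes into base_dn and search_filter. final_user_dn starts
+    // out equal to final_bind_dn and is overwritten by the user_dn_detection search
+    // performed right after a successful SIMPLE bind (see below).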
+ switch (params.sasl_mechanism) { case LDAPClient::Params::SASLMechanism::SIMPLE: { - const auto escaped_user_name = escapeForLDAP(params.user); - const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} }); - ::berval cred; cred.bv_val = const_cast(params.password.c_str()); cred.bv_len = params.password.size(); - diag(ldap_sasl_bind_s(handle, bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr)); + diag(ldap_sasl_bind_s(handle, final_bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr)); + + // Once bound, run the user DN search query and update the default value, if asked. + if (params.user_dn_detection) + { + const auto user_dn_search_results = search(*params.user_dn_detection); + + if (user_dn_search_results.empty()) + throw Exception("Failed to detect user DN: empty search results", ErrorCodes::LDAP_ERROR); + + if (user_dn_search_results.size() > 1) + throw Exception("Failed to detect user DN: more than one entry in the search results", ErrorCodes::LDAP_ERROR); + + final_user_dn = *user_dn_search_results.begin(); + } break; } @@ -316,6 +339,9 @@ void LDAPClient::closeConnection() noexcept ldap_unbind_ext_s(handle, nullptr, nullptr); handle = nullptr; + final_user_name.clear(); + final_bind_dn.clear(); + final_user_dn.clear(); } LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) @@ -333,10 +359,19 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) case SearchParams::Scope::CHILDREN: scope = LDAP_SCOPE_CHILDREN; break; } - const auto escaped_user_name = escapeForLDAP(params.user); - const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} }); - const auto base_dn = replacePlaceholders(search_params.base_dn, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn} }); - const auto search_filter = replacePlaceholders(search_params.search_filter, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn}, {"{base_dn}", base_dn} }); + const auto final_base_dn = replacePlaceholders(search_params.base_dn, { + {"{user_name}", final_user_name}, + {"{bind_dn}", final_bind_dn}, + {"{user_dn}", final_user_dn} + }); + + const auto final_search_filter = replacePlaceholders(search_params.search_filter, { + {"{user_name}", final_user_name}, + {"{bind_dn}", final_bind_dn}, + {"{user_dn}", final_user_dn}, + {"{base_dn}", final_base_dn} + }); + char * attrs[] = { const_cast(search_params.attribute.c_str()), nullptr }; ::timeval timeout = { params.search_timeout.count(), 0 }; LDAPMessage* msgs = nullptr; @@ -349,7 +384,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) } }); - diag(ldap_search_ext_s(handle, base_dn.c_str(), scope, search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs)); + diag(ldap_search_ext_s(handle, final_base_dn.c_str(), scope, final_search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs)); for ( auto * msg = ldap_first_message(handle, msgs); @@ -361,6 +396,27 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) { case LDAP_RES_SEARCH_ENTRY: { + // Extract DN separately, if the requested attribute is DN. 
+ if (boost::iequals("dn", search_params.attribute)) + { + BerElement * ber = nullptr; + + SCOPE_EXIT({ + if (ber) + { + ber_free(ber, 0); + ber = nullptr; + } + }); + + ::berval bv; + + diag(ldap_get_dn_ber(handle, msg, &ber, &bv)); + + if (bv.bv_val && bv.bv_len > 0) + result.emplace(bv.bv_val, bv.bv_len); + } + BerElement * ber = nullptr; SCOPE_EXIT({ @@ -471,12 +527,12 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) return result; } -bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params, SearchResultsList * search_results) +bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results) { if (params.user.empty()) throw Exception("LDAP authentication of a user with empty name is not allowed", ErrorCodes::BAD_ARGUMENTS); - if (!search_params != !search_results) + if (!role_search_params != !role_search_results) throw Exception("Cannot return LDAP search results", ErrorCodes::BAD_ARGUMENTS); // Silently reject authentication attempt if the password is empty as if it didn't match. @@ -489,21 +545,21 @@ bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params, openConnection(); // While connected, run search queries and save the results, if asked. - if (search_params) + if (role_search_params) { - search_results->clear(); - search_results->reserve(search_params->size()); + role_search_results->clear(); + role_search_results->reserve(role_search_params->size()); try { - for (const auto & single_search_params : *search_params) + for (const auto & params_instance : *role_search_params) { - search_results->emplace_back(search(single_search_params)); + role_search_results->emplace_back(search(params_instance)); } } catch (...) 
{ - search_results->clear(); + role_search_results->clear(); throw; } } @@ -532,7 +588,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams &) throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME); } -bool LDAPSimpleAuthClient::authenticate(const SearchParamsList *, SearchResultsList *) +bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList *, SearchResultsList *) { throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME); } diff --git a/src/Access/LDAPClient.h b/src/Access/LDAPClient.h index 4fc97bb957b..388e7ad0f0d 100644 --- a/src/Access/LDAPClient.h +++ b/src/Access/LDAPClient.h @@ -38,12 +38,20 @@ public: Scope scope = Scope::SUBTREE; String search_filter; String attribute = "cn"; + + void combineHash(std::size_t & seed) const; + }; + + struct RoleSearchParams + : public SearchParams + { String prefix; void combineHash(std::size_t & seed) const; }; - using SearchParamsList = std::vector; + using RoleSearchParamsList = std::vector; + using SearchResults = std::set; using SearchResultsList = std::vector; @@ -105,6 +113,8 @@ public: String user; String password; + std::optional user_dn_detection; + std::chrono::seconds verification_cooldown{0}; std::chrono::seconds operation_timeout{40}; @@ -134,6 +144,9 @@ protected: #if USE_LDAP LDAP * handle = nullptr; #endif + String final_user_name; + String final_bind_dn; + String final_user_dn; }; class LDAPSimpleAuthClient @@ -141,7 +154,7 @@ class LDAPSimpleAuthClient { public: using LDAPClient::LDAPClient; - bool authenticate(const SearchParamsList * search_params, SearchResultsList * search_results); + bool authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results); }; } diff --git a/src/Access/MemoryAccessStorage.h b/src/Access/MemoryAccessStorage.h index 92439342168..512ccff1d1b 100644 --- a/src/Access/MemoryAccessStorage.h +++ b/src/Access/MemoryAccessStorage.h @@ -51,7 +51,7 @@ private: void setAllNoLock(const std::vector> & all_entities, Notifications & notifications); void prepareNotifications(const Entry & entry, bool remove, Notifications & notifications) const; - mutable std::mutex mutex; + mutable std::recursive_mutex mutex; std::unordered_map entries_by_id; /// We want to search entries both by ID and by the pair of name and type. 
std::unordered_map entries_by_name_and_type[static_cast(EntityType::MAX)]; mutable std::list handlers_by_type[static_cast(EntityType::MAX)]; diff --git a/src/AggregateFunctions/AggregateFunctionAggThrow.cpp b/src/AggregateFunctions/AggregateFunctionAggThrow.cpp index 09e343b2dc5..c9d292f1993 100644 --- a/src/AggregateFunctions/AggregateFunctionAggThrow.cpp +++ b/src/AggregateFunctions/AggregateFunctionAggThrow.cpp @@ -11,6 +11,7 @@ namespace DB { +struct Settings; namespace ErrorCodes { @@ -105,7 +106,7 @@ public: void registerAggregateFunctionAggThrow(AggregateFunctionFactory & factory) { - factory.registerFunction("aggThrow", [](const std::string & name, const DataTypes & argument_types, const Array & parameters) + factory.registerFunction("aggThrow", [](const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings *) { Float64 throw_probability = 1.0; if (parameters.size() == 1) diff --git a/src/AggregateFunctions/AggregateFunctionAny.cpp b/src/AggregateFunctions/AggregateFunctionAny.cpp index 8b18abae884..9bc6e6af14f 100644 --- a/src/AggregateFunctions/AggregateFunctionAny.cpp +++ b/src/AggregateFunctions/AggregateFunctionAny.cpp @@ -1,28 +1,27 @@ #include #include -#include -#include "registerAggregateFunctions.h" namespace DB { +struct Settings; namespace { -AggregateFunctionPtr createAggregateFunctionAny(const std::string & name, const DataTypes & argument_types, const Array & parameters) +AggregateFunctionPtr createAggregateFunctionAny(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings * settings) { - return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters)); + return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters, settings)); } -AggregateFunctionPtr createAggregateFunctionAnyLast(const std::string & name, const DataTypes & argument_types, const Array & parameters) +AggregateFunctionPtr createAggregateFunctionAnyLast(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings * settings) { - return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters)); + return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters, settings)); } -AggregateFunctionPtr createAggregateFunctionAnyHeavy(const std::string & name, const DataTypes & argument_types, const Array & parameters) +AggregateFunctionPtr createAggregateFunctionAnyHeavy(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings * settings) { - return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters)); + return AggregateFunctionPtr(createAggregateFunctionSingleValue(name, argument_types, parameters, settings)); } } diff --git a/src/AggregateFunctions/AggregateFunctionArgMinMax.h b/src/AggregateFunctions/AggregateFunctionArgMinMax.h index 77c710e0587..335ee7c8ecb 100644 --- a/src/AggregateFunctions/AggregateFunctionArgMinMax.h +++ b/src/AggregateFunctions/AggregateFunctionArgMinMax.h @@ -8,6 +8,7 @@ namespace DB { +struct Settings; namespace ErrorCodes { diff --git a/src/AggregateFunctions/AggregateFunctionArray.cpp b/src/AggregateFunctions/AggregateFunctionArray.cpp index d0f17da5aa4..3eddbbb3fb2 100644 --- a/src/AggregateFunctions/AggregateFunctionArray.cpp +++ b/src/AggregateFunctions/AggregateFunctionArray.cpp @@ -5,6 +5,7 @@ namespace DB { +struct Settings; namespace 
ErrorCodes { diff --git a/src/AggregateFunctions/AggregateFunctionArray.h b/src/AggregateFunctions/AggregateFunctionArray.h index ef16fcde87b..f1005e2e43a 100644 --- a/src/AggregateFunctions/AggregateFunctionArray.h +++ b/src/AggregateFunctions/AggregateFunctionArray.h @@ -9,6 +9,7 @@ namespace DB { +struct Settings; namespace ErrorCodes { diff --git a/src/AggregateFunctions/AggregateFunctionAvg.cpp b/src/AggregateFunctions/AggregateFunctionAvg.cpp index 9b1c3d6cef6..a96c8c01407 100644 --- a/src/AggregateFunctions/AggregateFunctionAvg.cpp +++ b/src/AggregateFunctions/AggregateFunctionAvg.cpp @@ -3,10 +3,11 @@ #include #include #include -#include "registerAggregateFunctions.h" namespace DB { +struct Settings; + namespace ErrorCodes { extern const int ILLEGAL_TYPE_OF_ARGUMENT; @@ -20,7 +21,7 @@ bool allowType(const DataTypePtr& type) noexcept return t.isInt() || t.isUInt() || t.isFloat() || t.isDecimal(); } -AggregateFunctionPtr createAggregateFunctionAvg(const std::string & name, const DataTypes & argument_types, const Array & parameters) +AggregateFunctionPtr createAggregateFunctionAvg(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings *) { assertNoParameters(name, parameters); assertUnary(name, argument_types); diff --git a/src/AggregateFunctions/AggregateFunctionAvg.h b/src/AggregateFunctions/AggregateFunctionAvg.h index f2ea51ac28d..7cdef3bfe69 100644 --- a/src/AggregateFunctions/AggregateFunctionAvg.h +++ b/src/AggregateFunctions/AggregateFunctionAvg.h @@ -12,6 +12,7 @@ namespace DB { +struct Settings; template using DecimalOrVectorCol = std::conditional_t, ColumnDecimal, ColumnVector>; diff --git a/src/AggregateFunctions/AggregateFunctionAvgWeighted.cpp b/src/AggregateFunctions/AggregateFunctionAvgWeighted.cpp index 983b3bf3d4c..b7fdb3460e3 100644 --- a/src/AggregateFunctions/AggregateFunctionAvgWeighted.cpp +++ b/src/AggregateFunctions/AggregateFunctionAvgWeighted.cpp @@ -4,10 +4,11 @@ #include #include #include -#include "registerAggregateFunctions.h" namespace DB { +struct Settings; + namespace ErrorCodes { extern const int ILLEGAL_TYPE_OF_ARGUMENT; @@ -60,7 +61,8 @@ static IAggregateFunction * create(const IDataType & first_type, const IDataType #undef LINE } -AggregateFunctionPtr createAggregateFunctionAvgWeighted(const std::string & name, const DataTypes & argument_types, const Array & parameters) +AggregateFunctionPtr +createAggregateFunctionAvgWeighted(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings *) { assertNoParameters(name, parameters); assertBinary(name, argument_types); diff --git a/src/AggregateFunctions/AggregateFunctionAvgWeighted.h b/src/AggregateFunctions/AggregateFunctionAvgWeighted.h index 8b932918aa5..5842e7311e9 100644 --- a/src/AggregateFunctions/AggregateFunctionAvgWeighted.h +++ b/src/AggregateFunctions/AggregateFunctionAvgWeighted.h @@ -5,6 +5,8 @@ namespace DB { +struct Settings; + template using AvgWeightedFieldType = std::conditional_t, std::conditional_t, Decimal256, Decimal128>, diff --git a/src/AggregateFunctions/AggregateFunctionBitwise.cpp b/src/AggregateFunctions/AggregateFunctionBitwise.cpp index cfcef81243a..320231e09ab 100644 --- a/src/AggregateFunctions/AggregateFunctionBitwise.cpp +++ b/src/AggregateFunctions/AggregateFunctionBitwise.cpp @@ -7,6 +7,8 @@ namespace DB { +struct Settings; + namespace ErrorCodes { extern const int ILLEGAL_TYPE_OF_ARGUMENT; @@ -16,7 +18,7 @@ namespace { template