diff --git a/.github/ISSUE_TEMPLATE/bug-report-or-unexpected-behaviour.md b/.github/ISSUE_TEMPLATE/bug-report-or-unexpected-behaviour.md
index 9526b99b22b..542442e2856 100644
--- a/.github/ISSUE_TEMPLATE/bug-report-or-unexpected-behaviour.md
+++ b/.github/ISSUE_TEMPLATE/bug-report-or-unexpected-behaviour.md
@@ -17,7 +17,7 @@ A clear and concise description of what works not as it is supposed to.
* Which interface to use, if matters
* Non-default settings, if any
* `CREATE TABLE` statements for all tables involved
-* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/yandex/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
+* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
* Queries to run that lead to unexpected result
**Expected behavior**
diff --git a/.github/ISSUE_TEMPLATE/performance-issue.md b/.github/ISSUE_TEMPLATE/performance-issue.md
index 402617d00f7..96c8cb77afb 100644
--- a/.github/ISSUE_TEMPLATE/performance-issue.md
+++ b/.github/ISSUE_TEMPLATE/performance-issue.md
@@ -17,7 +17,7 @@ What exactly works slower than expected?
* Which interface to use, if matters
* Non-default settings, if any
* `CREATE TABLE` statements for all tables involved
-* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/yandex/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
+* Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
* Queries to run that lead to slow performance
**Expected performance**
diff --git a/CHANGELOG.md b/CHANGELOG.md
index c385831af85..9b03364e2d3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,401 +2,401 @@
### Bug Fix
* This release also contains all bug fixes from 19.14.6.12.
-* Fixed possible inconsistent state of table while executing `DROP` query for replicated table while zookeeper is not accessible. [#6045](https://github.com/yandex/ClickHouse/issues/6045) [#6413](https://github.com/yandex/ClickHouse/pull/6413) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
-* Fix for data race in StorageMerge [#6717](https://github.com/yandex/ClickHouse/pull/6717) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug introduced in query profiler which leads to endless recv from socket. [#6386](https://github.com/yandex/ClickHouse/pull/6386) ([alesapin](https://github.com/alesapin))
-* Fix excessive CPU usage while executing `JSONExtractRaw` function over a boolean value. [#6208](https://github.com/yandex/ClickHouse/pull/6208) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fixes the regression while pushing to materialized view. [#6415](https://github.com/yandex/ClickHouse/pull/6415) ([Ivan](https://github.com/abyss7))
-* Table function `url` had the vulnerability allowed the attacker to inject arbitrary HTTP headers in the request. This issue was found by [Nikita Tikhomirov](https://github.com/NSTikhomirov). [#6466](https://github.com/yandex/ClickHouse/pull/6466) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix useless `AST` check in Set index. [#6510](https://github.com/yandex/ClickHouse/issues/6510) [#6651](https://github.com/yandex/ClickHouse/pull/6651) ([Nikita Vasilev](https://github.com/nikvas0))
-* Fixed parsing of `AggregateFunction` values embedded in query. [#6575](https://github.com/yandex/ClickHouse/issues/6575) [#6773](https://github.com/yandex/ClickHouse/pull/6773) ([Zhichang Yu](https://github.com/yuzhichang))
-* Fixed wrong behaviour of `trim` functions family. [#6647](https://github.com/yandex/ClickHouse/pull/6647) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed possible inconsistent state of table while executing `DROP` query for replicated table while zookeeper is not accessible. [#6045](https://github.com/ClickHouse/ClickHouse/issues/6045) [#6413](https://github.com/ClickHouse/ClickHouse/pull/6413) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
+* Fix for data race in `StorageMerge`. [#6717](https://github.com/ClickHouse/ClickHouse/pull/6717) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix bug introduced in query profiler which leads to endless recv from socket. [#6386](https://github.com/ClickHouse/ClickHouse/pull/6386) ([alesapin](https://github.com/alesapin))
+* Fix excessive CPU usage while executing `JSONExtractRaw` function over a boolean value. [#6208](https://github.com/ClickHouse/ClickHouse/pull/6208) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed a regression when pushing to a materialized view. [#6415](https://github.com/ClickHouse/ClickHouse/pull/6415) ([Ivan](https://github.com/abyss7))
+* Table function `url` had a vulnerability that allowed an attacker to inject arbitrary HTTP headers in the request. This issue was found by [Nikita Tikhomirov](https://github.com/NSTikhomirov). [#6466](https://github.com/ClickHouse/ClickHouse/pull/6466) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix useless `AST` check in Set index. [#6510](https://github.com/ClickHouse/ClickHouse/issues/6510) [#6651](https://github.com/ClickHouse/ClickHouse/pull/6651) ([Nikita Vasilev](https://github.com/nikvas0))
+* Fixed parsing of `AggregateFunction` values embedded in query. [#6575](https://github.com/ClickHouse/ClickHouse/issues/6575) [#6773](https://github.com/ClickHouse/ClickHouse/pull/6773) ([Zhichang Yu](https://github.com/yuzhichang))
+* Fixed wrong behaviour of the `trim` family of functions. [#6647](https://github.com/ClickHouse/ClickHouse/pull/6647) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.14.6.12, 2019-09-19
### Bug Fix
-* Fix for function `АrrayEnumerateUniqRanked` with empty arrays in params. [#6928](https://github.com/yandex/ClickHouse/pull/6928) ([proller](https://github.com/proller))
-* Fixed subquery name in queries with `ARRAY JOIN` and `GLOBAL IN subquery` with alias. Use subquery alias for external table name if it is specified. [#6934](https://github.com/yandex/ClickHouse/pull/6934) ([Ivan](https://github.com/abyss7))
+* Fixed function `arrayEnumerateUniqRanked` with empty arrays in its parameters. [#6928](https://github.com/ClickHouse/ClickHouse/pull/6928) ([proller](https://github.com/proller))
+* Fixed subquery name in queries with `ARRAY JOIN` and `GLOBAL IN subquery` with alias. Use subquery alias for external table name if it is specified. [#6934](https://github.com/ClickHouse/ClickHouse/pull/6934) ([Ivan](https://github.com/abyss7))
### Build/Testing/Packaging Improvement
-* Fix [flapping](https://clickhouse-test-reports.s3.yandex.net/6944/aab95fd5175a513413c7395a73a82044bdafb906/functional_stateless_tests_(debug).html) test `00715_fetch_merged_or_mutated_part_zookeeper` by rewriting it to a shell scripts because it needs to wait for mutations to apply. [#6977](https://github.com/yandex/ClickHouse/pull/6977) ([Alexander Kazakov](https://github.com/Akazz))
-* Fixed UBSan and MemSan failure in function `groupUniqArray` with emtpy array argument. It was caused by placing of empty `PaddedPODArray` into hash table zero cell because constructor for zero cell value was not called. [#6937](https://github.com/yandex/ClickHouse/pull/6937) ([Amos Bird](https://github.com/amosbird))
+* Fix [flapping](https://clickhouse-test-reports.s3.yandex.net/6944/aab95fd5175a513413c7395a73a82044bdafb906/functional_stateless_tests_(debug).html) test `00715_fetch_merged_or_mutated_part_zookeeper` by rewriting it as a shell script, because it needs to wait for mutations to apply. [#6977](https://github.com/ClickHouse/ClickHouse/pull/6977) ([Alexander Kazakov](https://github.com/Akazz))
+* Fixed UBSan and MemSan failure in function `groupUniqArray` with an empty array argument. It was caused by placing an empty `PaddedPODArray` into the hash table's zero cell, because the constructor for the zero cell value was not called. [#6937](https://github.com/ClickHouse/ClickHouse/pull/6937) ([Amos Bird](https://github.com/amosbird))
## ClickHouse release 19.14.3.3, 2019-09-10
### New Feature
-* `WITH FILL` modifier for `ORDER BY`. (continuation of [#5069](https://github.com/yandex/ClickHouse/issues/5069)) [#6610](https://github.com/yandex/ClickHouse/pull/6610) ([Anton Popov](https://github.com/CurtizJ))
-* `WITH TIES` modifier for `LIMIT`. (continuation of [#5069](https://github.com/yandex/ClickHouse/issues/5069)) [#6610](https://github.com/yandex/ClickHouse/pull/6610) ([Anton Popov](https://github.com/CurtizJ))
-* Parse unquoted `NULL` literal as NULL (if setting `format_csv_unquoted_null_literal_as_null=1`). Initialize null fields with default values if data type of this field is not nullable (if setting `input_format_null_as_default=1`). [#5990](https://github.com/yandex/ClickHouse/issues/5990) [#6055](https://github.com/yandex/ClickHouse/pull/6055) ([tavplubix](https://github.com/tavplubix))
-* Support for wildcards in paths of table functions `file` and `hdfs`. If the path contains wildcards, the table will be readonly. Example of usage: `select * from hdfs('hdfs://hdfs1:9000/some_dir/another_dir/*/file{0..9}{0..9}')` and `select * from file('some_dir/{some_file,another_file,yet_another}.tsv', 'TSV', 'value UInt32')`. [#6092](https://github.com/yandex/ClickHouse/pull/6092) ([Olga Khvostikova](https://github.com/stavrolia))
-* New `system.metric_log` table which stores values of `system.events` and `system.metrics` with specified time interval. [#6363](https://github.com/yandex/ClickHouse/issues/6363) [#6467](https://github.com/yandex/ClickHouse/pull/6467) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) [#6530](https://github.com/yandex/ClickHouse/pull/6530) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Allow to write ClickHouse text logs to `system.text_log` table. [#6037](https://github.com/yandex/ClickHouse/issues/6037) [#6103](https://github.com/yandex/ClickHouse/pull/6103) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) [#6164](https://github.com/yandex/ClickHouse/pull/6164) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Show private symbols in stack traces (this is done via parsing symbol tables of ELF files). Added information about file and line number in stack traces if debug info is present. Speedup symbol name lookup with indexing symbols present in program. Added new SQL functions for introspection: `demangle` and `addressToLine`. Renamed function `symbolizeAddress` to `addressToSymbol` for consistency. Function `addressToSymbol` will return mangled name for performance reasons and you have to apply `demangle`. Added setting `allow_introspection_functions` which is turned off by default. [#6201](https://github.com/yandex/ClickHouse/pull/6201) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Table function `values` (the name is case-insensitive). It allows to read from `VALUES` list proposed in [#5984](https://github.com/yandex/ClickHouse/issues/5984). Example: `SELECT * FROM VALUES('a UInt64, s String', (1, 'one'), (2, 'two'), (3, 'three'))`. [#6217](https://github.com/yandex/ClickHouse/issues/6217). [#6209](https://github.com/yandex/ClickHouse/pull/6209) ([dimarub2000](https://github.com/dimarub2000))
-* Added an ability to alter storage settings. Syntax: `ALTER TABLE <table> MODIFY SETTING <setting> = <value>`. [#6366](https://github.com/yandex/ClickHouse/pull/6366) [#6669](https://github.com/yandex/ClickHouse/pull/6669) [#6685](https://github.com/yandex/ClickHouse/pull/6685) ([alesapin](https://github.com/alesapin))
-* Support for removing of detached parts. Syntax: `ALTER TABLE <table_name> DROP DETACHED PART '<part_id>'`. [#6158](https://github.com/yandex/ClickHouse/pull/6158) ([tavplubix](https://github.com/tavplubix))
-* Table constraints. Allows to add constraint to table definition which will be checked at insert. [#5273](https://github.com/yandex/ClickHouse/pull/5273) ([Gleb Novikov](https://github.com/NanoBjorn)) [#6652](https://github.com/yandex/ClickHouse/pull/6652) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Suppport for cascaded materialized views. [#6324](https://github.com/yandex/ClickHouse/pull/6324) ([Amos Bird](https://github.com/amosbird))
-* Turn on query profiler by default to sample every query execution thread once a second. [#6283](https://github.com/yandex/ClickHouse/pull/6283) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Input format `ORC`. [#6454](https://github.com/yandex/ClickHouse/pull/6454) [#6703](https://github.com/yandex/ClickHouse/pull/6703) ([akonyaev90](https://github.com/akonyaev90))
-* Added two new functions: `sigmoid` and `tanh` (that are useful for machine learning applications). [#6254](https://github.com/yandex/ClickHouse/pull/6254) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Function `hasToken(haystack, token)`, `hasTokenCaseInsensitive(haystack, token)` to check if given token is in haystack. Token is a maximal length substring between two non alphanumeric ASCII characters (or boundaries of haystack). Token must be a constant string. Supported by tokenbf_v1 index specialization. [#6596](https://github.com/yandex/ClickHouse/pull/6596), [#6662](https://github.com/yandex/ClickHouse/pull/6662) ([Vasily Nemkov](https://github.com/Enmk))
-* New function `neighbor(value, offset[, default_value])`. Allows to reach prev/next value within column in a block of data. [#5925](https://github.com/yandex/ClickHouse/pull/5925) ([Alex Krash](https://github.com/alex-krash)) [6685365ab8c5b74f9650492c88a012596eb1b0c6](https://github.com/yandex/ClickHouse/commit/6685365ab8c5b74f9650492c88a012596eb1b0c6) [341e2e4587a18065c2da1ca888c73389f48ce36c](https://github.com/yandex/ClickHouse/commit/341e2e4587a18065c2da1ca888c73389f48ce36c) [Alexey Milovidov](https://github.com/alexey-milovidov)
-* Created a function `currentUser()`, returning login of authorized user. Added alias `user()` for compatibility with MySQL. [#6470](https://github.com/yandex/ClickHouse/pull/6470) ([Alex Krash](https://github.com/alex-krash))
-* New aggregate functions `quantilesExactInclusive` and `quantilesExactExclusive` which were proposed in [#5885](https://github.com/yandex/ClickHouse/issues/5885). [#6477](https://github.com/yandex/ClickHouse/pull/6477) ([dimarub2000](https://github.com/dimarub2000))
-* Function `bitmapRange(bitmap, range_begin, range_end)` which returns new set with specified range (not include the `range_end`). [#6314](https://github.com/yandex/ClickHouse/pull/6314) ([Zhichang Yu](https://github.com/yuzhichang))
-* Function `geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precision)` which creates array of precision-long strings of geohash-boxes covering provided area. [#6127](https://github.com/yandex/ClickHouse/pull/6127) ([Vasily Nemkov](https://github.com/Enmk))
-* Implement support for INSERT query with `Kafka` tables. [#6012](https://github.com/yandex/ClickHouse/pull/6012) ([Ivan](https://github.com/abyss7))
-* Added support for `_partition` and `_timestamp` virtual columns to Kafka engine. [#6400](https://github.com/yandex/ClickHouse/pull/6400) ([Ivan](https://github.com/abyss7))
-* Possibility to remove sensitive data from `query_log`, server logs, process list with regexp-based rules. [#5710](https://github.com/yandex/ClickHouse/pull/5710) ([filimonov](https://github.com/filimonov))
+* `WITH FILL` modifier for `ORDER BY`. (continuation of [#5069](https://github.com/ClickHouse/ClickHouse/issues/5069)) [#6610](https://github.com/ClickHouse/ClickHouse/pull/6610) ([Anton Popov](https://github.com/CurtizJ))
+* `WITH TIES` modifier for `LIMIT`. (continuation of [#5069](https://github.com/ClickHouse/ClickHouse/issues/5069)) [#6610](https://github.com/ClickHouse/ClickHouse/pull/6610) ([Anton Popov](https://github.com/CurtizJ))
+* Parse unquoted `NULL` literal as NULL (if setting `format_csv_unquoted_null_literal_as_null=1`). Initialize null fields with default values if data type of this field is not nullable (if setting `input_format_null_as_default=1`). [#5990](https://github.com/ClickHouse/ClickHouse/issues/5990) [#6055](https://github.com/ClickHouse/ClickHouse/pull/6055) ([tavplubix](https://github.com/tavplubix))
+* Support for wildcards in paths of table functions `file` and `hdfs`. If the path contains wildcards, the table will be readonly. Example of usage: `select * from hdfs('hdfs://hdfs1:9000/some_dir/another_dir/*/file{0..9}{0..9}')` and `select * from file('some_dir/{some_file,another_file,yet_another}.tsv', 'TSV', 'value UInt32')`. [#6092](https://github.com/ClickHouse/ClickHouse/pull/6092) ([Olga Khvostikova](https://github.com/stavrolia))
+* New `system.metric_log` table which stores values of `system.events` and `system.metrics` with specified time interval. [#6363](https://github.com/ClickHouse/ClickHouse/issues/6363) [#6467](https://github.com/ClickHouse/ClickHouse/pull/6467) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) [#6530](https://github.com/ClickHouse/ClickHouse/pull/6530) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow to write ClickHouse text logs to `system.text_log` table. [#6037](https://github.com/ClickHouse/ClickHouse/issues/6037) [#6103](https://github.com/ClickHouse/ClickHouse/pull/6103) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)) [#6164](https://github.com/ClickHouse/ClickHouse/pull/6164) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Show private symbols in stack traces (this is done via parsing symbol tables of ELF files). Added information about file and line number in stack traces if debug info is present. Sped up symbol name lookup by indexing the symbols present in the program. Added new SQL functions for introspection: `demangle` and `addressToLine`. Renamed function `symbolizeAddress` to `addressToSymbol` for consistency. Function `addressToSymbol` will return a mangled name for performance reasons and you have to apply `demangle`. Added setting `allow_introspection_functions` which is turned off by default. [#6201](https://github.com/ClickHouse/ClickHouse/pull/6201) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Table function `values` (the name is case-insensitive). It allows reading from the `VALUES` list proposed in [#5984](https://github.com/ClickHouse/ClickHouse/issues/5984). Example: `SELECT * FROM VALUES('a UInt64, s String', (1, 'one'), (2, 'two'), (3, 'three'))`. [#6217](https://github.com/ClickHouse/ClickHouse/issues/6217). [#6209](https://github.com/ClickHouse/ClickHouse/pull/6209) ([dimarub2000](https://github.com/dimarub2000))
+* Added the ability to alter storage settings. Syntax: `ALTER TABLE <table> MODIFY SETTING <setting> = <value>`. [#6366](https://github.com/ClickHouse/ClickHouse/pull/6366) [#6669](https://github.com/ClickHouse/ClickHouse/pull/6669) [#6685](https://github.com/ClickHouse/ClickHouse/pull/6685) ([alesapin](https://github.com/alesapin))
+* Support for removing detached parts. Syntax: `ALTER TABLE <table_name> DROP DETACHED PART '<part_id>'`. [#6158](https://github.com/ClickHouse/ClickHouse/pull/6158) ([tavplubix](https://github.com/tavplubix))
+* Table constraints. Allows adding constraints to a table definition which are checked at insert. [#5273](https://github.com/ClickHouse/ClickHouse/pull/5273) ([Gleb Novikov](https://github.com/NanoBjorn)) [#6652](https://github.com/ClickHouse/ClickHouse/pull/6652) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Support for cascaded materialized views. [#6324](https://github.com/ClickHouse/ClickHouse/pull/6324) ([Amos Bird](https://github.com/amosbird))
+* Turn on query profiler by default to sample every query execution thread once a second. [#6283](https://github.com/ClickHouse/ClickHouse/pull/6283) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Input format `ORC`. [#6454](https://github.com/ClickHouse/ClickHouse/pull/6454) [#6703](https://github.com/ClickHouse/ClickHouse/pull/6703) ([akonyaev90](https://github.com/akonyaev90))
+* Added two new functions: `sigmoid` and `tanh` (that are useful for machine learning applications). [#6254](https://github.com/ClickHouse/ClickHouse/pull/6254) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Functions `hasToken(haystack, token)` and `hasTokenCaseInsensitive(haystack, token)` to check if the given token is in the haystack. A token is a maximal-length substring between two non-alphanumeric ASCII characters (or the boundaries of the haystack). The token must be a constant string. Supported by the tokenbf_v1 index specialization. [#6596](https://github.com/ClickHouse/ClickHouse/pull/6596), [#6662](https://github.com/ClickHouse/ClickHouse/pull/6662) ([Vasily Nemkov](https://github.com/Enmk))
+* New function `neighbor(value, offset[, default_value])`. Allows reaching the previous/next value within a column in a block of data. [#5925](https://github.com/ClickHouse/ClickHouse/pull/5925) ([Alex Krash](https://github.com/alex-krash)) [6685365ab8c5b74f9650492c88a012596eb1b0c6](https://github.com/ClickHouse/ClickHouse/commit/6685365ab8c5b74f9650492c88a012596eb1b0c6) [341e2e4587a18065c2da1ca888c73389f48ce36c](https://github.com/ClickHouse/ClickHouse/commit/341e2e4587a18065c2da1ca888c73389f48ce36c) ([Alexey Milovidov](https://github.com/alexey-milovidov))
+* Created a function `currentUser()`, returning the login of the authorized user. Added alias `user()` for compatibility with MySQL. [#6470](https://github.com/ClickHouse/ClickHouse/pull/6470) ([Alex Krash](https://github.com/alex-krash))
+* New aggregate functions `quantilesExactInclusive` and `quantilesExactExclusive` which were proposed in [#5885](https://github.com/ClickHouse/ClickHouse/issues/5885). [#6477](https://github.com/ClickHouse/ClickHouse/pull/6477) ([dimarub2000](https://github.com/dimarub2000))
+* Function `bitmapRange(bitmap, range_begin, range_end)` which returns a new set with the specified range (not including `range_end`). [#6314](https://github.com/ClickHouse/ClickHouse/pull/6314) ([Zhichang Yu](https://github.com/yuzhichang))
+* Function `geohashesInBox(longitude_min, latitude_min, longitude_max, latitude_max, precision)` which creates an array of precision-long strings of geohash boxes covering the provided area. [#6127](https://github.com/ClickHouse/ClickHouse/pull/6127) ([Vasily Nemkov](https://github.com/Enmk))
+* Implement support for INSERT query with `Kafka` tables. [#6012](https://github.com/ClickHouse/ClickHouse/pull/6012) ([Ivan](https://github.com/abyss7))
+* Added support for `_partition` and `_timestamp` virtual columns to Kafka engine. [#6400](https://github.com/ClickHouse/ClickHouse/pull/6400) ([Ivan](https://github.com/abyss7))
+* Possibility to remove sensitive data from `query_log`, server logs, process list with regexp-based rules. [#5710](https://github.com/ClickHouse/ClickHouse/pull/5710) ([filimonov](https://github.com/filimonov))
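
Several of the new-feature entries above are easiest to grasp from concrete queries. The sketch below is illustrative only — table and column names such as `hits`, `t`, and `x` are invented for the example — and follows the syntax described in the entries for `WITH FILL`, `ALTER TABLE ... MODIFY SETTING`, table constraints, `hasToken`/`neighbor`, and the introspection functions:

```sql
-- WITH FILL: fill the gaps in an ordered result (1, 2, 4, 5, 7, 8 here)
SELECT n
FROM (SELECT number AS n FROM numbers(10) WHERE n % 3 = 0)
ORDER BY n WITH FILL;

-- Altering a storage setting on an existing MergeTree table
ALTER TABLE hits MODIFY SETTING index_granularity_bytes = 10485760;

-- A table constraint that is checked on INSERT
CREATE TABLE t
(
    x UInt64,
    CONSTRAINT x_is_positive CHECK x > 0
) ENGINE = MergeTree ORDER BY x;

-- Token search and neighbor()
SELECT hasToken('Hello, world!', 'world');                -- returns 1
SELECT number, neighbor(number, -1, 999) FROM numbers(3); -- 999, 0, 1

-- Introspection functions are gated behind the new setting
SET allow_introspection_functions = 1;
SELECT demangle(addressToSymbol(arrayJoin(trace))) FROM system.trace_log LIMIT 5;
```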
### Experimental Feature
-* Input and output data format `Template`. It allows to specify custom format string for input and output. [#4354](https://github.com/yandex/ClickHouse/issues/4354) [#6727](https://github.com/yandex/ClickHouse/pull/6727) ([tavplubix](https://github.com/tavplubix))
-* Implementation of `LIVE VIEW` tables that were originally proposed in [#2898](https://github.com/yandex/ClickHouse/pull/2898), prepared in [#3925](https://github.com/yandex/ClickHouse/issues/3925), and then updated in [#5541](https://github.com/yandex/ClickHouse/issues/5541). See [#5541](https://github.com/yandex/ClickHouse/issues/5541) for detailed description. [#5541](https://github.com/yandex/ClickHouse/issues/5541) ([vzakaznikov](https://github.com/vzakaznikov)) [#6425](https://github.com/yandex/ClickHouse/pull/6425) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) [#6656](https://github.com/yandex/ClickHouse/pull/6656) ([vzakaznikov](https://github.com/vzakaznikov)) Note that `LIVE VIEW` feature may be removed in next versions.
+* Input and output data format `Template`. It allows specifying a custom format string for input and output. [#4354](https://github.com/ClickHouse/ClickHouse/issues/4354) [#6727](https://github.com/ClickHouse/ClickHouse/pull/6727) ([tavplubix](https://github.com/tavplubix))
+* Implementation of `LIVE VIEW` tables that were originally proposed in [#2898](https://github.com/ClickHouse/ClickHouse/pull/2898), prepared in [#3925](https://github.com/ClickHouse/ClickHouse/issues/3925), and then updated in [#5541](https://github.com/ClickHouse/ClickHouse/issues/5541). See [#5541](https://github.com/ClickHouse/ClickHouse/issues/5541) for detailed description. [#5541](https://github.com/ClickHouse/ClickHouse/issues/5541) ([vzakaznikov](https://github.com/vzakaznikov)) [#6425](https://github.com/ClickHouse/ClickHouse/pull/6425) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) [#6656](https://github.com/ClickHouse/ClickHouse/pull/6656) ([vzakaznikov](https://github.com/vzakaznikov)) Note that `LIVE VIEW` feature may be removed in next versions.
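
A minimal sketch of the experimental `LIVE VIEW` feature described above — the `events` table is invented for the example, and (as the entry warns) the feature is gated behind a setting and may be removed:

```sql
SET allow_experimental_live_view = 1;

-- A live view keeps the result of its query up to date
-- as the underlying table receives inserts
CREATE LIVE VIEW lv AS SELECT count() FROM events;

-- WATCH streams updated results to the client
WATCH lv;
```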
### Bug Fix
* This release also contains all bug fixes from 19.13 and 19.11.
-* Fix segmentation fault when the table has skip indices and vertical merge happens. [#6723](https://github.com/yandex/ClickHouse/pull/6723) ([alesapin](https://github.com/alesapin))
-* Fix per-column TTL with non-trivial column defaults. Previously in case of force TTL merge with `OPTIMIZE ... FINAL` query, expired values was replaced by type defaults instead of user-specified column defaults. [#6796](https://github.com/yandex/ClickHouse/pull/6796) ([Anton Popov](https://github.com/CurtizJ))
-* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/yandex/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
-* Fixed infinite loop when reading Kafka messages. Do not pause/resume consumer on subscription at all - otherwise it may get paused indefinitely in some scenarios. [#6354](https://github.com/yandex/ClickHouse/pull/6354) ([Ivan](https://github.com/abyss7))
-* Fix `Key expression contains comparison between inconvertible types` exception in `bitmapContains` function. [#6136](https://github.com/yandex/ClickHouse/issues/6136) [#6146](https://github.com/yandex/ClickHouse/issues/6146) [#6156](https://github.com/yandex/ClickHouse/pull/6156) ([dimarub2000](https://github.com/dimarub2000))
-* Fix segfault with enabled `optimize_skip_unused_shards` and missing sharding key. [#6384](https://github.com/yandex/ClickHouse/pull/6384) ([Anton Popov](https://github.com/CurtizJ))
-* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happed due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/yandex/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Removed extra verbose logging in MySQL interface [#6389](https://github.com/yandex/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Return ability to parse boolean settings from 'true' and 'false' in configuration file. [#6278](https://github.com/yandex/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
-* Fix crash in `quantile` and `median` function over `Nullable(Decimal128)`. [#6378](https://github.com/yandex/ClickHouse/pull/6378) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed possible incomplete result returned by `SELECT` query with `WHERE` condition on primary key contained conversion to Float type. It was caused by incorrect checking of monotonicity in `toFloat` function. [#6248](https://github.com/yandex/ClickHouse/issues/6248) [#6374](https://github.com/yandex/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
-* Check `max_expanded_ast_elements` setting for mutations. Clear mutations after `TRUNCATE TABLE`. [#6205](https://github.com/yandex/ClickHouse/pull/6205) ([Winter Zhang](https://github.com/zhang2014))
-* Fix JOIN results for key columns when used with `join_use_nulls`. Attach Nulls instead of columns defaults. [#6249](https://github.com/yandex/ClickHouse/pull/6249) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix for skip indices with vertical merge and alter. Fix for `Bad size of marks file` exception. [#6594](https://github.com/yandex/ClickHouse/issues/6594) [#6713](https://github.com/yandex/ClickHouse/pull/6713) ([alesapin](https://github.com/alesapin))
-* Fix rare crash in `ALTER MODIFY COLUMN` and vertical merge when one of merged/altered parts is empty (0 rows) [#6746](https://github.com/yandex/ClickHouse/issues/6746) [#6780](https://github.com/yandex/ClickHouse/pull/6780) ([alesapin](https://github.com/alesapin))
-* Fixed bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/yandex/ClickHouse/issues/6257). [#6281](https://github.com/yandex/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix wrong behavior and possible segfaults in `topK` and `topKWeighted` aggregated functions. [#6404](https://github.com/yandex/ClickHouse/pull/6404) ([Anton Popov](https://github.com/CurtizJ))
-* Fixed unsafe code around `getIdentifier` function. [#6401](https://github.com/yandex/ClickHouse/issues/6401) [#6409](https://github.com/yandex/ClickHouse/pull/6409) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed bug in MySQL wire protocol (is used while connecting to ClickHouse form MySQL client). Caused by heap buffer overflow in `PacketPayloadWriteBuffer`. [#6212](https://github.com/yandex/ClickHouse/pull/6212) ([Yuriy Baranov](https://github.com/yurriy))
-* Fixed memory leak in `bitmapSubsetInRange` function. [#6819](https://github.com/yandex/ClickHouse/pull/6819) ([Zhichang Yu](https://github.com/yuzhichang))
-* Fix rare bug when mutation executed after granularity change. [#6816](https://github.com/yandex/ClickHouse/pull/6816) ([alesapin](https://github.com/alesapin))
-* Allow protobuf message with all fields by default. [#6132](https://github.com/yandex/ClickHouse/pull/6132) ([Vitaly Baranov](https://github.com/vitlibar))
-* Resolve a bug with `nullIf` function when we send a `NULL` argument on the second argument. [#6446](https://github.com/yandex/ClickHouse/pull/6446) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Fix rare bug with wrong memory allocation/deallocation in complex key cache dictionaries with string fields which leads to infinite memory consumption (looks like memory leak). Bug reproduces when string size was a power of two starting from eight (8, 16, 32, etc). [#6447](https://github.com/yandex/ClickHouse/pull/6447) ([alesapin](https://github.com/alesapin))
-* Fixed Gorilla encoding on small sequences which caused exception `Cannot write after end of buffer`. [#6398](https://github.com/yandex/ClickHouse/issues/6398) [#6444](https://github.com/yandex/ClickHouse/pull/6444) ([Vasily Nemkov](https://github.com/Enmk))
-* Allow to use not nullable types in JOINs with `join_use_nulls` enabled. [#6705](https://github.com/yandex/ClickHouse/pull/6705) ([Artem Zuikov](https://github.com/4ertus2))
-* Disable `Poco::AbstractConfiguration` substitutions in query in `clickhouse-client`. [#6706](https://github.com/yandex/ClickHouse/pull/6706) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Avoid deadlock in `REPLACE PARTITION`. [#6677](https://github.com/yandex/ClickHouse/pull/6677) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Using `arrayReduce` for constant arguments may lead to segfault. [#6242](https://github.com/yandex/ClickHouse/issues/6242) [#6326](https://github.com/yandex/ClickHouse/pull/6326) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix inconsistent parts which can appear if replica was restored after `DROP PARTITION`. [#6522](https://github.com/yandex/ClickHouse/issues/6522) [#6523](https://github.com/yandex/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
-* Fixed hang in `JSONExtractRaw` function. [#6195](https://github.com/yandex/ClickHouse/issues/6195) [#6198](https://github.com/yandex/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug with incorrect skip indices serialization and aggregation with adaptive granularity. [#6594](https://github.com/yandex/ClickHouse/issues/6594). [#6748](https://github.com/yandex/ClickHouse/pull/6748) ([alesapin](https://github.com/alesapin))
-* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation. [#6225](https://github.com/yandex/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
-* Fix bug with writing secondary indices marks with adaptive granularity. [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
-* Fix initialization order while server startup. Since `StorageMergeTree::background_task_handle` is initialized in `startup()` the `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check if it is initialized. [#6080](https://github.com/yandex/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
-* Clearing the data buffer from the previous read operation that was completed with an error. [#6026](https://github.com/yandex/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa))
-* Fix bug with enabling adaptive granularity when creating new replica for Replicated*MergeTree table. [#6394](https://github.com/yandex/ClickHouse/issues/6394) [#6452](https://github.com/yandex/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
-* Fixed possible crash during server startup in case of exception happened in `libunwind` during exception at access to uninitialised `ThreadStatus` structure. [#6456](https://github.com/yandex/ClickHouse/pull/6456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
-* Fix crash in `yandexConsistentHash` function. Found by fuzz test. [#6304](https://github.com/yandex/ClickHouse/issues/6304) [#6305](https://github.com/yandex/ClickHouse/pull/6305) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed the possibility of hanging queries when server is overloaded and global thread pool becomes near full. This have higher chance to happen on clusters with large number of shards (hundreds), because distributed queries allocate a thread per connection to each shard. For example, this issue may reproduce if a cluster of 330 shards is processing 30 concurrent distributed queries. This issue affects all versions starting from 19.2. [#6301](https://github.com/yandex/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/yandex/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix segfault when decoding symbol table. [#6603](https://github.com/yandex/ClickHouse/pull/6603) ([Amos Bird](https://github.com/amosbird))
-* Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it doesn't contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/yandex/ClickHouse/issues/6094) [#6119](https://github.com/yandex/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Removed extra quoting of description in `system.settings` table. [#6696](https://github.com/yandex/ClickHouse/issues/6696) [#6699](https://github.com/yandex/ClickHouse/pull/6699) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Avoid possible deadlock in `TRUNCATE` of Replicated table. [#6695](https://github.com/yandex/ClickHouse/pull/6695) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix reading in order of sorting key. [#6189](https://github.com/yandex/ClickHouse/pull/6189) ([Anton Popov](https://github.com/CurtizJ))
-* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/yandex/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
-* Fixed the case when server may close listening sockets but not shutdown and continue serving remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/yandex/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug opened by [#4405](https://github.com/yandex/ClickHouse/pull/4405) (since 19.4.0). Reproduces in queries to Distributed tables over MergeTree tables when we doesn't query any columns (`SELECT 1`). [#6236](https://github.com/yandex/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
-* Fixed overflow in integer division of signed type to unsigned type. The behaviour was exactly as in C or C++ language (integer promotion rules) that may be surprising. Please note that the overflow is still possible when dividing large signed number to large unsigned number or vice-versa (but that case is less usual). The issue existed in all server versions. [#6214](https://github.com/yandex/ClickHouse/issues/6214) [#6233](https://github.com/yandex/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Limit maximum sleep time for throttling when `max_execution_speed` or `max_execution_speed_bytes` is set. Fixed false errors like `Estimated query execution time (inf seconds) is too long`. [#5547](https://github.com/yandex/ClickHouse/issues/5547) [#6232](https://github.com/yandex/ClickHouse/pull/6232) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed issues about using `MATERIALIZED` columns and aliases in `MaterializedView`. [#448](https://github.com/yandex/ClickHouse/issues/448) [#3484](https://github.com/yandex/ClickHouse/issues/3484) [#3450](https://github.com/yandex/ClickHouse/issues/3450) [#2878](https://github.com/yandex/ClickHouse/issues/2878) [#2285](https://github.com/yandex/ClickHouse/issues/2285) [#3796](https://github.com/yandex/ClickHouse/pull/3796) ([Amos Bird](https://github.com/amosbird)) [#6316](https://github.com/yandex/ClickHouse/pull/6316) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix `FormatFactory` behaviour for input streams which are not implemented as processor. [#6495](https://github.com/yandex/ClickHouse/pull/6495) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fixed typo. [#6631](https://github.com/yandex/ClickHouse/pull/6631) ([Alex Ryndin](https://github.com/alexryndin))
-* Typo in the error message ( is -> are ). [#6839](https://github.com/yandex/ClickHouse/pull/6839) ([Denis Zhuravlev](https://github.com/den-crane))
-* Fixed error while parsing of columns list from string if type contained a comma (this issue was relevant for `File`, `URL`, `HDFS` storages) [#6217](https://github.com/yandex/ClickHouse/issues/6217). [#6209](https://github.com/yandex/ClickHouse/pull/6209) ([dimarub2000](https://github.com/dimarub2000))
+* Fix segmentation fault when the table has skip indices and vertical merge happens. [#6723](https://github.com/ClickHouse/ClickHouse/pull/6723) ([alesapin](https://github.com/alesapin))
+* Fix per-column TTL with non-trivial column defaults. Previously, in case of a forced TTL merge with an `OPTIMIZE ... FINAL` query, expired values were replaced by type defaults instead of user-specified column defaults. [#6796](https://github.com/ClickHouse/ClickHouse/pull/6796) ([Anton Popov](https://github.com/CurtizJ))
+* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/ClickHouse/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
+* Fixed infinite loop when reading Kafka messages. Do not pause/resume consumer on subscription at all - otherwise it may get paused indefinitely in some scenarios. [#6354](https://github.com/ClickHouse/ClickHouse/pull/6354) ([Ivan](https://github.com/abyss7))
+* Fix `Key expression contains comparison between inconvertible types` exception in `bitmapContains` function. [#6136](https://github.com/ClickHouse/ClickHouse/issues/6136) [#6146](https://github.com/ClickHouse/ClickHouse/issues/6146) [#6156](https://github.com/ClickHouse/ClickHouse/pull/6156) ([dimarub2000](https://github.com/dimarub2000))
+* Fix segfault with enabled `optimize_skip_unused_shards` and missing sharding key. [#6384](https://github.com/ClickHouse/ClickHouse/pull/6384) ([Anton Popov](https://github.com/CurtizJ))
+* Fixed wrong code in mutations that may lead to memory corruption. Fixed a segfault with a read of address `0x14c0` that may happen due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed a race condition in preparation of mutation queries. Fixed a deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/ClickHouse/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Removed extra verbose logging in MySQL interface [#6389](https://github.com/ClickHouse/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Returned the ability to parse boolean settings from 'true' and 'false' in the configuration file. [#6278](https://github.com/ClickHouse/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
+* Fix crash in `quantile` and `median` function over `Nullable(Decimal128)`. [#6378](https://github.com/ClickHouse/ClickHouse/pull/6378) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed possible incomplete result returned by a `SELECT` query with a `WHERE` condition on a primary key that contained a conversion to a Float type. It was caused by incorrect checking of monotonicity in the `toFloat` function. [#6248](https://github.com/ClickHouse/ClickHouse/issues/6248) [#6374](https://github.com/ClickHouse/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
+* Check `max_expanded_ast_elements` setting for mutations. Clear mutations after `TRUNCATE TABLE`. [#6205](https://github.com/ClickHouse/ClickHouse/pull/6205) ([Winter Zhang](https://github.com/zhang2014))
+* Fix JOIN results for key columns when used with `join_use_nulls`. Attach NULLs instead of column defaults. [#6249](https://github.com/ClickHouse/ClickHouse/pull/6249) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix for skip indices with vertical merge and alter. Fix for `Bad size of marks file` exception. [#6594](https://github.com/ClickHouse/ClickHouse/issues/6594) [#6713](https://github.com/ClickHouse/ClickHouse/pull/6713) ([alesapin](https://github.com/alesapin))
+* Fix rare crash in `ALTER MODIFY COLUMN` and vertical merge when one of merged/altered parts is empty (0 rows) [#6746](https://github.com/ClickHouse/ClickHouse/issues/6746) [#6780](https://github.com/ClickHouse/ClickHouse/pull/6780) ([alesapin](https://github.com/alesapin))
+* Fixed bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/ClickHouse/ClickHouse/issues/6257). [#6281](https://github.com/ClickHouse/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix wrong behavior and possible segfaults in `topK` and `topKWeighted` aggregated functions. [#6404](https://github.com/ClickHouse/ClickHouse/pull/6404) ([Anton Popov](https://github.com/CurtizJ))
+* Fixed unsafe code around `getIdentifier` function. [#6401](https://github.com/ClickHouse/ClickHouse/issues/6401) [#6409](https://github.com/ClickHouse/ClickHouse/pull/6409) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug in the MySQL wire protocol (used while connecting to ClickHouse from a MySQL client). Caused by a heap buffer overflow in `PacketPayloadWriteBuffer`. [#6212](https://github.com/ClickHouse/ClickHouse/pull/6212) ([Yuriy Baranov](https://github.com/yurriy))
+* Fixed memory leak in `bitmapSubsetInRange` function. [#6819](https://github.com/ClickHouse/ClickHouse/pull/6819) ([Zhichang Yu](https://github.com/yuzhichang))
+* Fix rare bug when mutation executed after granularity change. [#6816](https://github.com/ClickHouse/ClickHouse/pull/6816) ([alesapin](https://github.com/alesapin))
+* Allow protobuf message with all fields by default. [#6132](https://github.com/ClickHouse/ClickHouse/pull/6132) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed a bug in the `nullIf` function when a `NULL` is passed as the second argument. [#6446](https://github.com/ClickHouse/ClickHouse/pull/6446) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Fix rare bug with wrong memory allocation/deallocation in complex key cache dictionaries with string fields, which led to infinite memory consumption (looks like a memory leak). The bug reproduces when the string size is a power of two starting from eight (8, 16, 32, etc). [#6447](https://github.com/ClickHouse/ClickHouse/pull/6447) ([alesapin](https://github.com/alesapin))
+* Fixed Gorilla encoding on small sequences which caused exception `Cannot write after end of buffer`. [#6398](https://github.com/ClickHouse/ClickHouse/issues/6398) [#6444](https://github.com/ClickHouse/ClickHouse/pull/6444) ([Vasily Nemkov](https://github.com/Enmk))
+* Allow using non-nullable types in JOINs with `join_use_nulls` enabled. [#6705](https://github.com/ClickHouse/ClickHouse/pull/6705) ([Artem Zuikov](https://github.com/4ertus2))
+* Disable `Poco::AbstractConfiguration` substitutions in query in `clickhouse-client`. [#6706](https://github.com/ClickHouse/ClickHouse/pull/6706) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Avoid deadlock in `REPLACE PARTITION`. [#6677](https://github.com/ClickHouse/ClickHouse/pull/6677) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Using `arrayReduce` for constant arguments could lead to a segfault. [#6242](https://github.com/ClickHouse/ClickHouse/issues/6242) [#6326](https://github.com/ClickHouse/ClickHouse/pull/6326) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix inconsistent parts which can appear if replica was restored after `DROP PARTITION`. [#6522](https://github.com/ClickHouse/ClickHouse/issues/6522) [#6523](https://github.com/ClickHouse/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
+* Fixed hang in `JSONExtractRaw` function. [#6195](https://github.com/ClickHouse/ClickHouse/issues/6195) [#6198](https://github.com/ClickHouse/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix bug with incorrect skip indices serialization and aggregation with adaptive granularity. [#6594](https://github.com/ClickHouse/ClickHouse/issues/6594). [#6748](https://github.com/ClickHouse/ClickHouse/pull/6748) ([alesapin](https://github.com/alesapin))
+* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation. [#6225](https://github.com/ClickHouse/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
+* Fix bug with writing secondary indices marks with adaptive granularity. [#6126](https://github.com/ClickHouse/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
+* Fix initialization order during server startup. Since `StorageMergeTree::background_task_handle` is initialized in `startup()`, `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check if it is initialized. [#6080](https://github.com/ClickHouse/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
+* Clear the data buffer from a previous read operation that completed with an error. [#6026](https://github.com/ClickHouse/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa))
+* Fix bug with enabling adaptive granularity when creating new replica for Replicated*MergeTree table. [#6394](https://github.com/ClickHouse/ClickHouse/issues/6394) [#6452](https://github.com/ClickHouse/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
+* Fixed possible crash during server startup in case an exception happened in `libunwind` during an exception raised on access to an uninitialised `ThreadStatus` structure. [#6456](https://github.com/ClickHouse/ClickHouse/pull/6456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
+* Fix crash in `yandexConsistentHash` function. Found by fuzz test. [#6304](https://github.com/ClickHouse/ClickHouse/issues/6304) [#6305](https://github.com/ClickHouse/ClickHouse/pull/6305) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed the possibility of hanging queries when the server is overloaded and the global thread pool becomes nearly full. This had a higher chance of happening on clusters with a large number of shards (hundreds), because distributed queries allocate a thread per connection to each shard. For example, this issue may reproduce if a cluster of 330 shards is processing 30 concurrent distributed queries. This issue affects all versions starting from 19.2. [#6301](https://github.com/ClickHouse/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/ClickHouse/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix segfault when decoding symbol table. [#6603](https://github.com/ClickHouse/ClickHouse/pull/6603) ([Amos Bird](https://github.com/amosbird))
+* Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to a not-Nullable column in case it doesn't contain NULLs (e.g. in a query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`). [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Removed extra quoting of description in `system.settings` table. [#6696](https://github.com/ClickHouse/ClickHouse/issues/6696) [#6699](https://github.com/ClickHouse/ClickHouse/pull/6699) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Avoid possible deadlock in `TRUNCATE` of Replicated table. [#6695](https://github.com/ClickHouse/ClickHouse/pull/6695) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix reading in order of sorting key. [#6189](https://github.com/ClickHouse/ClickHouse/pull/6189) ([Anton Popov](https://github.com/CurtizJ))
+* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/ClickHouse/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
+* Fixed the case when the server may close listening sockets but not shut down, and continue serving remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/ClickHouse/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix bug opened by [#4405](https://github.com/ClickHouse/ClickHouse/pull/4405) (since 19.4.0). Reproduces in queries to Distributed tables over MergeTree tables when we don't query any columns (`SELECT 1`). [#6236](https://github.com/ClickHouse/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
+* Fixed overflow in integer division of a signed type by an unsigned type. The behaviour was exactly as in the C or C++ language (integer promotion rules), which may be surprising. Please note that the overflow is still possible when dividing a large signed number by a large unsigned number or vice versa (but that case is less usual). The issue existed in all server versions. [#6214](https://github.com/ClickHouse/ClickHouse/issues/6214) [#6233](https://github.com/ClickHouse/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Limit maximum sleep time for throttling when `max_execution_speed` or `max_execution_speed_bytes` is set. Fixed false errors like `Estimated query execution time (inf seconds) is too long`. [#5547](https://github.com/ClickHouse/ClickHouse/issues/5547) [#6232](https://github.com/ClickHouse/ClickHouse/pull/6232) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed issues about using `MATERIALIZED` columns and aliases in `MaterializedView`. [#448](https://github.com/ClickHouse/ClickHouse/issues/448) [#3484](https://github.com/ClickHouse/ClickHouse/issues/3484) [#3450](https://github.com/ClickHouse/ClickHouse/issues/3450) [#2878](https://github.com/ClickHouse/ClickHouse/issues/2878) [#2285](https://github.com/ClickHouse/ClickHouse/issues/2285) [#3796](https://github.com/ClickHouse/ClickHouse/pull/3796) ([Amos Bird](https://github.com/amosbird)) [#6316](https://github.com/ClickHouse/ClickHouse/pull/6316) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix `FormatFactory` behaviour for input streams which are not implemented as processor. [#6495](https://github.com/ClickHouse/ClickHouse/pull/6495) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fixed typo. [#6631](https://github.com/ClickHouse/ClickHouse/pull/6631) ([Alex Ryndin](https://github.com/alexryndin))
+* Fixed a typo in an error message (is -> are). [#6839](https://github.com/ClickHouse/ClickHouse/pull/6839) ([Denis Zhuravlev](https://github.com/den-crane))
+* Fixed an error while parsing a column list from a string if a type contained a comma (this issue was relevant for the `File`, `URL`, `HDFS` storages) [#6217](https://github.com/ClickHouse/ClickHouse/issues/6217). [#6209](https://github.com/ClickHouse/ClickHouse/pull/6209) ([dimarub2000](https://github.com/dimarub2000))
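
Two of the fixes above touch the `join_use_nulls` setting; for context, it controls whether non-matching `JOIN` cells are filled with `NULL` instead of type defaults. A minimal sketch (the inline subqueries stand in for real tables):

```sql
SET join_use_nulls = 1;

-- With the setting on, a left-side key with no match on the right
-- yields NULL (rather than '' or 0) in the right-side columns
SELECT a.x, b.y
FROM (SELECT 1 AS x) AS a
LEFT JOIN (SELECT 2 AS x, 'hit' AS y) AS b USING (x);
-- x = 1, y = NULL
```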
### Security Fix
* This release also contains all bug security fixes from 19.13 and 19.11.
-* Fixed the possibility of a fabricated query to cause server crash due to stack overflow in SQL parser. Fixed the possibility of stack overflow in Merge and Distributed tables, materialized views and conditions for row-level security that involve subqueries. [#6433](https://github.com/yandex/ClickHouse/pull/6433) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed the possibility of a fabricated query causing a server crash due to stack overflow in the SQL parser. Fixed the possibility of stack overflow in Merge and Distributed tables, materialized views and conditions for row-level security that involve subqueries. [#6433](https://github.com/ClickHouse/ClickHouse/pull/6433) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Improvement
-* Correct implementation of ternary logic for `AND/OR`. [#6048](https://github.com/yandex/ClickHouse/pull/6048) ([Alexander Kazakov](https://github.com/Akazz))
+* Now values and rows with expired TTL will be removed after an `OPTIMIZE ... FINAL` query from old parts without TTL info or with outdated TTL info, e.g. after an `ALTER ... MODIFY TTL` query. Added queries `SYSTEM STOP/START TTL MERGES` to disallow/allow assigning merges with TTL and filtering expired values in all merges. [#6274](https://github.com/ClickHouse/ClickHouse/pull/6274) ([Anton Popov](https://github.com/CurtizJ))
+* Possibility to change the location of the ClickHouse history file for the client using the `CLICKHOUSE_HISTORY_FILE` environment variable. [#6840](https://github.com/ClickHouse/ClickHouse/pull/6840) ([filimonov](https://github.com/filimonov))
-* Remove `dry_run` flag from `InterpreterSelectQuery`. ... [#6375](https://github.com/yandex/ClickHouse/pull/6375) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Support `ASOF JOIN` with `ON` section. [#6211](https://github.com/yandex/ClickHouse/pull/6211) ([Artem Zuikov](https://github.com/4ertus2))
-* Better support of skip indexes for mutations and replication. Support for `MATERIALIZE/CLEAR INDEX ... IN PARTITION` query. `UPDATE x = x` recalculates all indices that use column `x`. [#5053](https://github.com/yandex/ClickHouse/pull/5053) ([Nikita Vasilev](https://github.com/nikvas0))
+* Allow to `ATTACH` live views (for example, at server startup) regardless of the `allow_experimental_live_view` setting. [#6754](https://github.com/ClickHouse/ClickHouse/pull/6754) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* For stack traces gathered by query profiler, do not include stack frames generated by the query profiler itself. [#6250](https://github.com/yandex/ClickHouse/pull/6250) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Now table functions `values`, `file`, `url`, `hdfs` have support for ALIAS columns. [#6255](https://github.com/yandex/ClickHouse/pull/6255) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Throw an exception if a `config.d` file doesn't have the same root element as the config file. [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000))
-* Print extra info in exception message for `no space left on device`. [#6182](https://github.com/yandex/ClickHouse/issues/6182), [#6252](https://github.com/yandex/ClickHouse/issues/6252) [#6352](https://github.com/yandex/ClickHouse/pull/6352) ([tavplubix](https://github.com/tavplubix))
-* When determining shards of a `Distributed` table to be covered by a read query (for `optimize_skip_unused_shards` = 1) ClickHouse now checks conditions from both `prewhere` and `where` clauses of select statement. [#6521](https://github.com/yandex/ClickHouse/pull/6521) ([Alexander Kazakov](https://github.com/Akazz))
-* Enabled `SIMDJSON` for machines without AVX2 but with SSE 4.2 and PCLMUL instruction set. [#6285](https://github.com/yandex/ClickHouse/issues/6285) [#6320](https://github.com/yandex/ClickHouse/pull/6320) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* ClickHouse can work on filesystems without `O_DIRECT` support (such as ZFS and BtrFS) without additional tuning. [#4449](https://github.com/yandex/ClickHouse/issues/4449) [#6730](https://github.com/yandex/ClickHouse/pull/6730) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Support push down predicate for final subquery. [#6120](https://github.com/yandex/ClickHouse/pull/6120) ([TCeason](https://github.com/TCeason)) [#6162](https://github.com/yandex/ClickHouse/pull/6162) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Better `JOIN ON` keys extraction [#6131](https://github.com/yandex/ClickHouse/pull/6131) ([Artem Zuikov](https://github.com/4ertus2))
-* Upated `SIMDJSON`. [#6285](https://github.com/yandex/ClickHouse/issues/6285). [#6306](https://github.com/yandex/ClickHouse/pull/6306) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Optimize selecting of smallest column for `SELECT count()` query. [#6344](https://github.com/yandex/ClickHouse/pull/6344) ([Amos Bird](https://github.com/amosbird))
-* Added `strict` parameter in `windowFunnel()`. When the `strict` is set, the `windowFunnel()` applies conditions only for the unique values. [#6548](https://github.com/yandex/ClickHouse/pull/6548) ([achimbab](https://github.com/achimbab))
-* Safer interface of `mysqlxx::Pool`. [#6150](https://github.com/yandex/ClickHouse/pull/6150) ([avasiliev](https://github.com/avasiliev))
-* Options line size when executing with `--help` option now corresponds with terminal size. [#6590](https://github.com/yandex/ClickHouse/pull/6590) ([dimarub2000](https://github.com/dimarub2000))
-* Disable "read in order" optimization for aggregation without keys. [#6599](https://github.com/yandex/ClickHouse/pull/6599) ([Anton Popov](https://github.com/CurtizJ))
-* HTTP status code for `INCORRECT_DATA` and `TYPE_MISMATCH` error codes was changed from default `500 Internal Server Error` to `400 Bad Request`. [#6271](https://github.com/yandex/ClickHouse/pull/6271) ([Alexander Rodin](https://github.com/a-rodin))
-* Move Join object from `ExpressionAction` into `AnalyzedJoin`. `ExpressionAnalyzer` and `ExpressionAction` do not know about `Join` class anymore. Its logic is hidden by `AnalyzedJoin` iface. [#6801](https://github.com/yandex/ClickHouse/pull/6801) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed possible deadlock of distributed queries when one of shards is localhost but the query is sent via network connection. [#6759](https://github.com/yandex/ClickHouse/pull/6759) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Changed semantic of multiple tables `RENAME` to avoid possible deadlocks. [#6757](https://github.com/yandex/ClickHouse/issues/6757). [#6756](https://github.com/yandex/ClickHouse/pull/6756) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Rewritten MySQL compatibility server to prevent loading full packet payload in memory. Decreased memory consumption for each connection to approximately `2 * DBMS_DEFAULT_BUFFER_SIZE` (read/write buffers). [#5811](https://github.com/yandex/ClickHouse/pull/5811) ([Yuriy Baranov](https://github.com/yurriy))
-* Move AST alias interpreting logic out of parser that doesn't have to know anything about query semantics. [#6108](https://github.com/yandex/ClickHouse/pull/6108) ([Artem Zuikov](https://github.com/4ertus2))
-* Slightly more safe parsing of `NamesAndTypesList`. [#6408](https://github.com/yandex/ClickHouse/issues/6408). [#6410](https://github.com/yandex/ClickHouse/pull/6410) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `clickhouse-copier`: Allow use `where_condition` from config with `partition_key` alias in query for checking partition existence (Earlier it was used only in reading data queries). [#6577](https://github.com/yandex/ClickHouse/pull/6577) ([proller](https://github.com/proller))
-* Added optional message argument in `throwIf`. ([#5772](https://github.com/yandex/ClickHouse/issues/5772)) [#6329](https://github.com/yandex/ClickHouse/pull/6329) ([Vdimir](https://github.com/Vdimir))
-* Server exception got while sending insertion data is now being processed in client as well. [#5891](https://github.com/yandex/ClickHouse/issues/5891) [#6711](https://github.com/yandex/ClickHouse/pull/6711) ([dimarub2000](https://github.com/dimarub2000))
-* Added a metric `DistributedFilesToInsert` that shows the total number of files in filesystem that are selected to send to remote servers by Distributed tables. The number is summed across all shards. [#6600](https://github.com/yandex/ClickHouse/pull/6600) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Move most of JOINs prepare logic from `ExpressionAction/ExpressionAnalyzer` to `AnalyzedJoin`. [#6785](https://github.com/yandex/ClickHouse/pull/6785) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix TSan [warning](https://clickhouse-test-reports.s3.yandex.net/6399/c1c1d1daa98e199e620766f1bd06a5921050a00d/functional_stateful_tests_(thread).html) 'lock-order-inversion'. [#6740](https://github.com/yandex/ClickHouse/pull/6740) ([Vasily Nemkov](https://github.com/Enmk))
-* Better information messages about lack of Linux capabilities. Logging fatal errors with "fatal" level, that will make it easier to find in `system.text_log`. [#6441](https://github.com/yandex/ClickHouse/pull/6441) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* When enable dumping temporary data to the disk to restrict memory usage during `GROUP BY`, `ORDER BY`, it didn't check the free disk space. The fix add a new setting `min_free_disk_space`, when the free disk space it smaller then the threshold, the query will stop and throw `ErrorCodes::NOT_ENOUGH_SPACE`. [#6678](https://github.com/yandex/ClickHouse/pull/6678) ([Weiqing Xu](https://github.com/weiqxu)) [#6691](https://github.com/yandex/ClickHouse/pull/6691) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Removed recursive rwlock by thread. It makes no sense, because threads are reused between queries. `SELECT` query may acquire a lock in one thread, hold a lock from another thread and exit from first thread. In the same time, first thread can be reused by `DROP` query. This will lead to false "Attempt to acquire exclusive lock recursively" messages. [#6771](https://github.com/yandex/ClickHouse/pull/6771) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Split `ExpressionAnalyzer.appendJoin()`. Prepare a place in `ExpressionAnalyzer` for `MergeJoin`. [#6524](https://github.com/yandex/ClickHouse/pull/6524) ([Artem Zuikov](https://github.com/4ertus2))
-* Added `mysql_native_password` authentication plugin to MySQL compatibility server. [#6194](https://github.com/yandex/ClickHouse/pull/6194) ([Yuriy Baranov](https://github.com/yurriy))
-* Less number of `clock_gettime` calls; fixed ABI compatibility between debug/release in `Allocator` (insignificant issue). [#6197](https://github.com/yandex/ClickHouse/pull/6197) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Move `collectUsedColumns` from `ExpressionAnalyzer` to `SyntaxAnalyzer`. `SyntaxAnalyzer` makes `required_source_columns` itself now. [#6416](https://github.com/yandex/ClickHouse/pull/6416) ([Artem Zuikov](https://github.com/4ertus2))
-* Add setting `joined_subquery_requires_alias` to require aliases for subselects and table functions in `FROM` that more than one table is present (i.e. queries with JOINs). [#6733](https://github.com/yandex/ClickHouse/pull/6733) ([Artem Zuikov](https://github.com/4ertus2))
-* Extract `GetAggregatesVisitor` class from `ExpressionAnalyzer`. [#6458](https://github.com/yandex/ClickHouse/pull/6458) ([Artem Zuikov](https://github.com/4ertus2))
-* `system.query_log`: change data type of `type` column to `Enum`. [#6265](https://github.com/yandex/ClickHouse/pull/6265) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
-* Static linking of `sha256_password` authentication plugin. [#6512](https://github.com/yandex/ClickHouse/pull/6512) ([Yuriy Baranov](https://github.com/yurriy))
-* Avoid extra dependency for the setting `compile` to work. In previous versions, the user may get error like `cannot open crti.o`, `unable to find library -lc` etc. [#6309](https://github.com/yandex/ClickHouse/pull/6309) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* More validation of the input that may come from malicious replica. [#6303](https://github.com/yandex/ClickHouse/pull/6303) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Now `clickhouse-obfuscator` file is available in `clickhouse-client` package. In previous versions it was available as `clickhouse obfuscator` (with whitespace). [#5816](https://github.com/yandex/ClickHouse/issues/5816) [#6609](https://github.com/yandex/ClickHouse/pull/6609) ([dimarub2000](https://github.com/dimarub2000))
-* Fixed deadlock when we have at least two queries that read at least two tables in different order and another query that performs DDL operation on one of tables. Fixed another very rare deadlock. [#6764](https://github.com/yandex/ClickHouse/pull/6764) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added `os_thread_ids` column to `system.processes` and `system.query_log` for better debugging possibilities. [#6763](https://github.com/yandex/ClickHouse/pull/6763) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* A workaround for PHP mysqlnd extension bugs which occur when `sha256_password` is used as a default authentication plugin (described in [#6031](https://github.com/yandex/ClickHouse/issues/6031)). [#6113](https://github.com/yandex/ClickHouse/pull/6113) ([Yuriy Baranov](https://github.com/yurriy))
-* Remove unneeded place with changed nullability columns. [#6693](https://github.com/yandex/ClickHouse/pull/6693) ([Artem Zuikov](https://github.com/4ertus2))
-* Set default value of `queue_max_wait_ms` to zero, because current value (five seconds) makes no sense. There are rare circumstances when this settings has any use. Added settings `replace_running_query_max_wait_ms`, `kafka_max_wait_ms` and `connection_pool_max_wait_ms` for disambiguation. [#6692](https://github.com/yandex/ClickHouse/pull/6692) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Extract `SelectQueryExpressionAnalyzer` from `ExpressionAnalyzer`. Keep the last one for non-select queries. [#6499](https://github.com/yandex/ClickHouse/pull/6499) ([Artem Zuikov](https://github.com/4ertus2))
-* Removed duplicating input and output formats. [#6239](https://github.com/yandex/ClickHouse/pull/6239) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Allow user to override `poll_interval` and `idle_connection_timeout` settings on connection. [#6230](https://github.com/yandex/ClickHouse/pull/6230) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `MergeTree` now has an additional option `ttl_only_drop_parts` (disabled by default) to avoid partial pruning of parts, so that they dropped completely when all the rows in a part are expired. [#6191](https://github.com/yandex/ClickHouse/pull/6191) ([Sergi Vladykin](https://github.com/svladykin))
-* Type checks for set index functions. Throw exception if function got a wrong type. This fixes fuzz test with UBSan. [#6511](https://github.com/yandex/ClickHouse/pull/6511) ([Nikita Vasilev](https://github.com/nikvas0))
+* Correct implementation of ternary logic for `AND/OR`. [#6048](https://github.com/ClickHouse/ClickHouse/pull/6048) ([Alexander Kazakov](https://github.com/Akazz))
+* Values and rows with expired TTL are now removed after an `OPTIMIZE ... FINAL` query from old parts that have no TTL info or have outdated TTL info, e.g. after an `ALTER ... MODIFY TTL` query. Added the queries `SYSTEM STOP/START TTL MERGES` to disallow/allow assigning merges with TTL and to filter expired values in all merges (see the first sketch after this list). [#6274](https://github.com/ClickHouse/ClickHouse/pull/6274) ([Anton Popov](https://github.com/CurtizJ))
+* Added the possibility to change the location of the ClickHouse client history file using the `CLICKHOUSE_HISTORY_FILE` environment variable. [#6840](https://github.com/ClickHouse/ClickHouse/pull/6840) ([filimonov](https://github.com/filimonov))
+* Remove `dry_run` flag from `InterpreterSelectQuery`. ... [#6375](https://github.com/ClickHouse/ClickHouse/pull/6375) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Support `ASOF JOIN` with `ON` section. [#6211](https://github.com/ClickHouse/ClickHouse/pull/6211) ([Artem Zuikov](https://github.com/4ertus2))
+* Better support of skip indexes for mutations and replication. Support for `MATERIALIZE/CLEAR INDEX ... IN PARTITION` query. `UPDATE x = x` recalculates all indices that use column `x`. [#5053](https://github.com/ClickHouse/ClickHouse/pull/5053) ([Nikita Vasilev](https://github.com/nikvas0))
+* Allow to `ATTACH` live views (for example, at the server startup) regardless of the `allow_experimental_live_view` setting. [#6754](https://github.com/ClickHouse/ClickHouse/pull/6754) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* For stack traces gathered by query profiler, do not include stack frames generated by the query profiler itself. [#6250](https://github.com/ClickHouse/ClickHouse/pull/6250) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Now table functions `values`, `file`, `url`, `hdfs` have support for ALIAS columns. [#6255](https://github.com/ClickHouse/ClickHouse/pull/6255) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Throw an exception if a `config.d` file doesn't have the same root element as the main config file. [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000))
+* Print extra info in exception message for `no space left on device`. [#6182](https://github.com/ClickHouse/ClickHouse/issues/6182), [#6252](https://github.com/ClickHouse/ClickHouse/issues/6252) [#6352](https://github.com/ClickHouse/ClickHouse/pull/6352) ([tavplubix](https://github.com/tavplubix))
+* When determining shards of a `Distributed` table to be covered by a read query (for `optimize_skip_unused_shards` = 1) ClickHouse now checks conditions from both the `prewhere` and `where` clauses of the select statement. [#6521](https://github.com/ClickHouse/ClickHouse/pull/6521) ([Alexander Kazakov](https://github.com/Akazz))
+* Enabled `SIMDJSON` for machines without AVX2 but with SSE 4.2 and PCLMUL instruction set. [#6285](https://github.com/ClickHouse/ClickHouse/issues/6285) [#6320](https://github.com/ClickHouse/ClickHouse/pull/6320) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* ClickHouse can work on filesystems without `O_DIRECT` support (such as ZFS and BtrFS) without additional tuning. [#4449](https://github.com/ClickHouse/ClickHouse/issues/4449) [#6730](https://github.com/ClickHouse/ClickHouse/pull/6730) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Support predicate push-down for the final subquery. [#6120](https://github.com/ClickHouse/ClickHouse/pull/6120) ([TCeason](https://github.com/TCeason)) [#6162](https://github.com/ClickHouse/ClickHouse/pull/6162) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Better `JOIN ON` keys extraction [#6131](https://github.com/ClickHouse/ClickHouse/pull/6131) ([Artem Zuikov](https://github.com/4ertus2))
+* Updated `SIMDJSON`. [#6285](https://github.com/ClickHouse/ClickHouse/issues/6285). [#6306](https://github.com/ClickHouse/ClickHouse/pull/6306) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Optimized the selection of the smallest column for `SELECT count()` queries. [#6344](https://github.com/ClickHouse/ClickHouse/pull/6344) ([Amos Bird](https://github.com/amosbird))
+* Added a `strict` parameter to `windowFunnel()`. When `strict` is set, `windowFunnel()` applies conditions only to unique values. [#6548](https://github.com/ClickHouse/ClickHouse/pull/6548) ([achimbab](https://github.com/achimbab))
+* Safer interface of `mysqlxx::Pool`. [#6150](https://github.com/ClickHouse/ClickHouse/pull/6150) ([avasiliev](https://github.com/avasiliev))
+* The width of the options list printed with the `--help` option now corresponds to the terminal size. [#6590](https://github.com/ClickHouse/ClickHouse/pull/6590) ([dimarub2000](https://github.com/dimarub2000))
+* Disable "read in order" optimization for aggregation without keys. [#6599](https://github.com/ClickHouse/ClickHouse/pull/6599) ([Anton Popov](https://github.com/CurtizJ))
+* The HTTP status code for the `INCORRECT_DATA` and `TYPE_MISMATCH` error codes was changed from the default `500 Internal Server Error` to `400 Bad Request`. [#6271](https://github.com/ClickHouse/ClickHouse/pull/6271) ([Alexander Rodin](https://github.com/a-rodin))
+* Move the Join object from `ExpressionAction` into `AnalyzedJoin`. `ExpressionAnalyzer` and `ExpressionAction` do not know about the `Join` class anymore. Its logic is hidden behind the `AnalyzedJoin` interface. [#6801](https://github.com/ClickHouse/ClickHouse/pull/6801) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed possible deadlock of distributed queries when one of the shards is localhost but the query is sent via a network connection. [#6759](https://github.com/ClickHouse/ClickHouse/pull/6759) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Changed the semantics of multiple-table `RENAME` to avoid possible deadlocks. [#6757](https://github.com/ClickHouse/ClickHouse/issues/6757). [#6756](https://github.com/ClickHouse/ClickHouse/pull/6756) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Rewrote the MySQL compatibility server to prevent loading the full packet payload into memory. Decreased memory consumption for each connection to approximately `2 * DBMS_DEFAULT_BUFFER_SIZE` (read/write buffers). [#5811](https://github.com/ClickHouse/ClickHouse/pull/5811) ([Yuriy Baranov](https://github.com/yurriy))
+* Moved AST alias interpretation logic out of the parser, which doesn't have to know anything about query semantics. [#6108](https://github.com/ClickHouse/ClickHouse/pull/6108) ([Artem Zuikov](https://github.com/4ertus2))
+* Slightly safer parsing of `NamesAndTypesList`. [#6408](https://github.com/ClickHouse/ClickHouse/issues/6408). [#6410](https://github.com/ClickHouse/ClickHouse/pull/6410) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `clickhouse-copier`: Allow using `where_condition` from the config with the `partition_key` alias in queries that check partition existence (earlier it was used only in data-reading queries). [#6577](https://github.com/ClickHouse/ClickHouse/pull/6577) ([proller](https://github.com/proller))
+* Added an optional message argument to `throwIf` (also sketched after this list). ([#5772](https://github.com/ClickHouse/ClickHouse/issues/5772)) [#6329](https://github.com/ClickHouse/ClickHouse/pull/6329) ([Vdimir](https://github.com/Vdimir))
+* A server exception raised while sending insertion data is now processed in the client as well. [#5891](https://github.com/ClickHouse/ClickHouse/issues/5891) [#6711](https://github.com/ClickHouse/ClickHouse/pull/6711) ([dimarub2000](https://github.com/dimarub2000))
+* Added a metric `DistributedFilesToInsert` that shows the total number of files in the filesystem that are selected for sending to remote servers by Distributed tables. The number is summed across all shards. [#6600](https://github.com/ClickHouse/ClickHouse/pull/6600) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Moved most of the JOIN preparation logic from `ExpressionAction/ExpressionAnalyzer` to `AnalyzedJoin`. [#6785](https://github.com/ClickHouse/ClickHouse/pull/6785) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix TSan [warning](https://clickhouse-test-reports.s3.yandex.net/6399/c1c1d1daa98e199e620766f1bd06a5921050a00d/functional_stateful_tests_(thread).html) 'lock-order-inversion'. [#6740](https://github.com/ClickHouse/ClickHouse/pull/6740) ([Vasily Nemkov](https://github.com/Enmk))
+* Better information messages about the lack of Linux capabilities. Fatal errors are now logged with the "fatal" level, which makes them easier to find in `system.text_log`. [#6441](https://github.com/ClickHouse/ClickHouse/pull/6441) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* When dumping temporary data to disk to restrict memory usage during `GROUP BY` or `ORDER BY` was enabled, the free disk space was not checked. The fix adds a new setting `min_free_disk_space`: when the free disk space is smaller than this threshold, the query stops and throws `ErrorCodes::NOT_ENOUGH_SPACE`. [#6678](https://github.com/ClickHouse/ClickHouse/pull/6678) ([Weiqing Xu](https://github.com/weiqxu)) [#6691](https://github.com/ClickHouse/ClickHouse/pull/6691) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Removed the recursive rwlock per thread. It made no sense, because threads are reused between queries. A `SELECT` query may acquire a lock in one thread, hold a lock from another thread and exit from the first thread. At the same time, the first thread can be reused by a `DROP` query. This could lead to false "Attempt to acquire exclusive lock recursively" messages. [#6771](https://github.com/ClickHouse/ClickHouse/pull/6771) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Split `ExpressionAnalyzer.appendJoin()`. Prepare a place in `ExpressionAnalyzer` for `MergeJoin`. [#6524](https://github.com/ClickHouse/ClickHouse/pull/6524) ([Artem Zuikov](https://github.com/4ertus2))
+* Added `mysql_native_password` authentication plugin to MySQL compatibility server. [#6194](https://github.com/ClickHouse/ClickHouse/pull/6194) ([Yuriy Baranov](https://github.com/yurriy))
+* Fewer `clock_gettime` calls; fixed ABI compatibility between debug/release in `Allocator` (insignificant issue). [#6197](https://github.com/ClickHouse/ClickHouse/pull/6197) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Move `collectUsedColumns` from `ExpressionAnalyzer` to `SyntaxAnalyzer`. `SyntaxAnalyzer` makes `required_source_columns` itself now. [#6416](https://github.com/ClickHouse/ClickHouse/pull/6416) ([Artem Zuikov](https://github.com/4ertus2))
+* Add setting `joined_subquery_requires_alias` to require aliases for subselects and table functions in `FROM` when more than one table is present (i.e. queries with JOINs). [#6733](https://github.com/ClickHouse/ClickHouse/pull/6733) ([Artem Zuikov](https://github.com/4ertus2))
+* Extract `GetAggregatesVisitor` class from `ExpressionAnalyzer`. [#6458](https://github.com/ClickHouse/ClickHouse/pull/6458) ([Artem Zuikov](https://github.com/4ertus2))
+* `system.query_log`: change data type of `type` column to `Enum`. [#6265](https://github.com/ClickHouse/ClickHouse/pull/6265) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
+* Static linking of `sha256_password` authentication plugin. [#6512](https://github.com/ClickHouse/ClickHouse/pull/6512) ([Yuriy Baranov](https://github.com/yurriy))
+* Avoid an extra dependency for the setting `compile` to work. In previous versions, the user could get errors like `cannot open crti.o`, `unable to find library -lc` etc. [#6309](https://github.com/ClickHouse/ClickHouse/pull/6309) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* More validation of input that may come from a malicious replica. [#6303](https://github.com/ClickHouse/ClickHouse/pull/6303) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Now the `clickhouse-obfuscator` file is available in the `clickhouse-client` package. In previous versions it was available as `clickhouse obfuscator` (with whitespace). [#5816](https://github.com/ClickHouse/ClickHouse/issues/5816) [#6609](https://github.com/ClickHouse/ClickHouse/pull/6609) ([dimarub2000](https://github.com/dimarub2000))
+* Fixed a deadlock that occurs when we have at least two queries that read at least two tables in different order and another query that performs a DDL operation on one of the tables. Fixed another very rare deadlock. [#6764](https://github.com/ClickHouse/ClickHouse/pull/6764) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added `os_thread_ids` column to `system.processes` and `system.query_log` for better debugging possibilities. [#6763](https://github.com/ClickHouse/ClickHouse/pull/6763) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* A workaround for PHP mysqlnd extension bugs which occur when `sha256_password` is used as a default authentication plugin (described in [#6031](https://github.com/ClickHouse/ClickHouse/issues/6031)). [#6113](https://github.com/ClickHouse/ClickHouse/pull/6113) ([Yuriy Baranov](https://github.com/yurriy))
+* Remove unneeded place with changed nullability columns. [#6693](https://github.com/ClickHouse/ClickHouse/pull/6693) ([Artem Zuikov](https://github.com/4ertus2))
+* Set the default value of `queue_max_wait_ms` to zero, because the current value (five seconds) makes no sense. There are rare circumstances when this setting has any use. Added the settings `replace_running_query_max_wait_ms`, `kafka_max_wait_ms` and `connection_pool_max_wait_ms` for disambiguation. [#6692](https://github.com/ClickHouse/ClickHouse/pull/6692) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Extract `SelectQueryExpressionAnalyzer` from `ExpressionAnalyzer`. Keep the latter for non-select queries. [#6499](https://github.com/ClickHouse/ClickHouse/pull/6499) ([Artem Zuikov](https://github.com/4ertus2))
+* Removed duplicating input and output formats. [#6239](https://github.com/ClickHouse/ClickHouse/pull/6239) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Allow user to override `poll_interval` and `idle_connection_timeout` settings on connection. [#6230](https://github.com/ClickHouse/ClickHouse/pull/6230) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `MergeTree` now has an additional option `ttl_only_drop_parts` (disabled by default) to avoid partial pruning of parts, so that they are dropped completely when all rows in a part are expired. [#6191](https://github.com/ClickHouse/ClickHouse/pull/6191) ([Sergi Vladykin](https://github.com/svladykin))
+* Type checks for set index functions. Throw an exception if a function gets a wrong type. This fixes the fuzz test with UBSan. [#6511](https://github.com/ClickHouse/ClickHouse/pull/6511) ([Nikita Vasilev](https://github.com/nikvas0))
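+
+A minimal sketch of the TTL merge controls described in the list above (the table name and TTL expression are hypothetical):
+
+```sql
+-- Temporarily disallow assigning merges with TTL.
+SYSTEM STOP TTL MERGES;
+
+-- Change the TTL; old parts keep outdated TTL info at this point.
+ALTER TABLE events MODIFY TTL event_date + INTERVAL 30 DAY;
+
+-- Allow TTL merges again and force a merge: expired values and rows
+-- are removed even from parts with missing or outdated TTL info.
+SYSTEM START TTL MERGES;
+OPTIMIZE TABLE events FINAL;
+```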
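+
+And a sketch of the optional `throwIf` message argument (the condition and message are illustrative):
+
+```sql
+-- Throws with a custom message instead of the generic one
+-- when the condition holds for some row.
+SELECT throwIf(number = 3, 'Custom error: number 3 is not allowed')
+FROM numbers(10);
+```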
### Performance Improvement
-* Optimize queries with `ORDER BY expressions` clause, where `expressions` have coinciding prefix with sorting key in `MergeTree` tables. This optimization is controlled by `optimize_read_in_order` setting. [#6054](https://github.com/yandex/ClickHouse/pull/6054) [#6629](https://github.com/yandex/ClickHouse/pull/6629) ([Anton Popov](https://github.com/CurtizJ))
-* Allow to use multiple threads during parts loading and removal. [#6372](https://github.com/yandex/ClickHouse/issues/6372) [#6074](https://github.com/yandex/ClickHouse/issues/6074) [#6438](https://github.com/yandex/ClickHouse/pull/6438) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Implemented batch variant of updating aggregate function states. It may lead to performance benefits. [#6435](https://github.com/yandex/ClickHouse/pull/6435) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Using `FastOps` library for functions `exp`, `log`, `sigmoid`, `tanh`. FastOps is a fast vector math library from Michael Parakhin (Yandex CTO). Improved performance of `exp` and `log` functions more than 6 times. The functions `exp` and `log` from `Float32` argument will return `Float32` (in previous versions they always return `Float64`). Now `exp(nan)` may return `inf`. The result of `exp` and `log` functions may be not the nearest machine representable number to the true answer. [#6254](https://github.com/yandex/ClickHouse/pull/6254) ([alexey-milovidov](https://github.com/alexey-milovidov)) Using Danila Kutenin variant to make fastops working [#6317](https://github.com/yandex/ClickHouse/pull/6317) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Disable consecutive key optimization for `UInt8/16`. [#6298](https://github.com/yandex/ClickHouse/pull/6298) [#6701](https://github.com/yandex/ClickHouse/pull/6701) ([akuzm](https://github.com/akuzm))
-* Improved performance of `simdjson` library by getting rid of dynamic allocation in `ParsedJson::Iterator`. [#6479](https://github.com/yandex/ClickHouse/pull/6479) ([Vitaly Baranov](https://github.com/vitlibar))
-* Pre-fault pages when allocating memory with `mmap()`. [#6667](https://github.com/yandex/ClickHouse/pull/6667) ([akuzm](https://github.com/akuzm))
-* Fix performance bug in `Decimal` comparison. [#6380](https://github.com/yandex/ClickHouse/pull/6380) ([Artem Zuikov](https://github.com/4ertus2))
+* Optimize queries with an `ORDER BY expressions` clause, where `expressions` have a prefix coinciding with the sorting key in `MergeTree` tables. This optimization is controlled by the `optimize_read_in_order` setting (see the sketch after this list). [#6054](https://github.com/ClickHouse/ClickHouse/pull/6054) [#6629](https://github.com/ClickHouse/ClickHouse/pull/6629) ([Anton Popov](https://github.com/CurtizJ))
+* Allow to use multiple threads during parts loading and removal. [#6372](https://github.com/ClickHouse/ClickHouse/issues/6372) [#6074](https://github.com/ClickHouse/ClickHouse/issues/6074) [#6438](https://github.com/ClickHouse/ClickHouse/pull/6438) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Implemented batch variant of updating aggregate function states. It may lead to performance benefits. [#6435](https://github.com/ClickHouse/ClickHouse/pull/6435) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Using the `FastOps` library for the functions `exp`, `log`, `sigmoid`, `tanh`. FastOps is a fast vector math library from Michael Parakhin (Yandex CTO). Improved performance of the `exp` and `log` functions by more than 6 times. The functions `exp` and `log` with a `Float32` argument will return `Float32` (in previous versions they always returned `Float64`). Now `exp(nan)` may return `inf`. The result of the `exp` and `log` functions may not be the nearest machine-representable number to the true answer. [#6254](https://github.com/ClickHouse/ClickHouse/pull/6254) ([alexey-milovidov](https://github.com/alexey-milovidov)) Used Danila Kutenin's variant to make fastops work [#6317](https://github.com/ClickHouse/ClickHouse/pull/6317) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Disable consecutive key optimization for `UInt8/16`. [#6298](https://github.com/ClickHouse/ClickHouse/pull/6298) [#6701](https://github.com/ClickHouse/ClickHouse/pull/6701) ([akuzm](https://github.com/akuzm))
+* Improved performance of `simdjson` library by getting rid of dynamic allocation in `ParsedJson::Iterator`. [#6479](https://github.com/ClickHouse/ClickHouse/pull/6479) ([Vitaly Baranov](https://github.com/vitlibar))
+* Pre-fault pages when allocating memory with `mmap()`. [#6667](https://github.com/ClickHouse/ClickHouse/pull/6667) ([akuzm](https://github.com/akuzm))
+* Fix performance bug in `Decimal` comparison. [#6380](https://github.com/ClickHouse/ClickHouse/pull/6380) ([Artem Zuikov](https://github.com/4ertus2))
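+
+A minimal sketch of the read-in-order optimization above, assuming a hypothetical `MergeTree` table sorted by `(event_date, user_id)`:
+
+```sql
+-- The ORDER BY expressions share a prefix with the table's sorting key,
+-- so data is read in sorting-key order and a full sort is avoided.
+SET optimize_read_in_order = 1;
+SELECT event_date, user_id
+FROM events
+ORDER BY event_date, user_id
+LIMIT 10;
+```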
### Build/Testing/Packaging Improvement
-* Remove Compiler (runtime template instantiation) because we've win over it's performance. [#6646](https://github.com/yandex/ClickHouse/pull/6646) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added performance test to show degradation of performance in gcc-9 in more isolated way. [#6302](https://github.com/yandex/ClickHouse/pull/6302) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added table function `numbers_mt`, which is multithreaded version of `numbers`. Updated performance tests with hash functions. [#6554](https://github.com/yandex/ClickHouse/pull/6554) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Comparison mode in `clickhouse-benchmark` [#6220](https://github.com/yandex/ClickHouse/issues/6220) [#6343](https://github.com/yandex/ClickHouse/pull/6343) ([dimarub2000](https://github.com/dimarub2000))
-* Best effort for printing stack traces. Also added `SIGPROF` as a debugging signal to print stack trace of a running thread. [#6529](https://github.com/yandex/ClickHouse/pull/6529) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Every function in its own file, part 10. [#6321](https://github.com/yandex/ClickHouse/pull/6321) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Remove doubled const `TABLE_IS_READ_ONLY`. [#6566](https://github.com/yandex/ClickHouse/pull/6566) ([filimonov](https://github.com/filimonov))
-* Formatting changes for `StringHashMap` PR [#5417](https://github.com/yandex/ClickHouse/issues/5417). [#6700](https://github.com/yandex/ClickHouse/pull/6700) ([akuzm](https://github.com/akuzm))
-* Better subquery for join creation in `ExpressionAnalyzer`. [#6824](https://github.com/yandex/ClickHouse/pull/6824) ([Artem Zuikov](https://github.com/4ertus2))
-* Remove a redundant condition (found by PVS Studio). [#6775](https://github.com/yandex/ClickHouse/pull/6775) ([akuzm](https://github.com/akuzm))
-* Separate the hash table interface for `ReverseIndex`. [#6672](https://github.com/yandex/ClickHouse/pull/6672) ([akuzm](https://github.com/akuzm))
-* Refactoring of settings. [#6689](https://github.com/yandex/ClickHouse/pull/6689) ([alesapin](https://github.com/alesapin))
-* Add comments for `set` index functions. [#6319](https://github.com/yandex/ClickHouse/pull/6319) ([Nikita Vasilev](https://github.com/nikvas0))
-* Increase OOM score in debug version on Linux. [#6152](https://github.com/yandex/ClickHouse/pull/6152) ([akuzm](https://github.com/akuzm))
-* HDFS HA now work in debug build. [#6650](https://github.com/yandex/ClickHouse/pull/6650) ([Weiqing Xu](https://github.com/weiqxu))
-* Added a test to `transform_query_for_external_database`. [#6388](https://github.com/yandex/ClickHouse/pull/6388) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add test for multiple materialized views for Kafka table. [#6509](https://github.com/yandex/ClickHouse/pull/6509) ([Ivan](https://github.com/abyss7))
-* Make a better build scheme. [#6500](https://github.com/yandex/ClickHouse/pull/6500) ([Ivan](https://github.com/abyss7))
-* Fixed `test_external_dictionaries` integration in case it was executed under non root user. [#6507](https://github.com/yandex/ClickHouse/pull/6507) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* The bug reproduces when total size of written packets exceeds `DBMS_DEFAULT_BUFFER_SIZE`. [#6204](https://github.com/yandex/ClickHouse/pull/6204) ([Yuriy Baranov](https://github.com/yurriy))
-* Added a test for `RENAME` table race condition [#6752](https://github.com/yandex/ClickHouse/pull/6752) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Avoid data race on Settings in `KILL QUERY`. [#6753](https://github.com/yandex/ClickHouse/pull/6753) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add integration test for handling errors by a cache dictionary. [#6755](https://github.com/yandex/ClickHouse/pull/6755) ([Vitaly Baranov](https://github.com/vitlibar))
-* Move `input_format_defaults_for_omitted_fields` to incompatible changes [#6573](https://github.com/yandex/ClickHouse/pull/6573) ([Artem Zuikov](https://github.com/4ertus2))
-* Disable parsing of ELF object files on Mac OS, because it makes no sense. [#6578](https://github.com/yandex/ClickHouse/pull/6578) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Attempt to make changelog generator better. [#6327](https://github.com/yandex/ClickHouse/pull/6327) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Adding `-Wshadow` switch to the GCC. [#6325](https://github.com/yandex/ClickHouse/pull/6325) ([kreuzerkrieg](https://github.com/kreuzerkrieg))
-* Removed obsolete code for `mimalloc` support. [#6715](https://github.com/yandex/ClickHouse/pull/6715) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `zlib-ng` determines x86 capabilities and saves this info to global variables. This is done in defalteInit call, which may be made by different threads simultaneously. To avoid multithreaded writes, do it on library startup. [#6141](https://github.com/yandex/ClickHouse/pull/6141) ([akuzm](https://github.com/akuzm))
-* Regression test for a bug which in join which was fixed in [#5192](https://github.com/yandex/ClickHouse/issues/5192). [#6147](https://github.com/yandex/ClickHouse/pull/6147) ([Bakhtiyor Ruziev](https://github.com/theruziev))
-* Fixed MSan report. [#6144](https://github.com/yandex/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix flapping TTL test. [#6782](https://github.com/yandex/ClickHouse/pull/6782) ([Anton Popov](https://github.com/CurtizJ))
-* Fixed false data race in `MergeTreeDataPart::is_frozen` field. [#6583](https://github.com/yandex/ClickHouse/pull/6583) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed timeouts in fuzz test. In previous version, it managed to find false hangup in query `SELECT * FROM numbers_mt(gccMurmurHash(''))`. [#6582](https://github.com/yandex/ClickHouse/pull/6582) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added debug checks to `static_cast` of columns. [#6581](https://github.com/yandex/ClickHouse/pull/6581) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Support for Oracle Linux in official RPM packages. [#6356](https://github.com/yandex/ClickHouse/issues/6356) [#6585](https://github.com/yandex/ClickHouse/pull/6585) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Changed json perftests from `once` to `loop` type. [#6536](https://github.com/yandex/ClickHouse/pull/6536) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* `odbc-bridge.cpp` defines `main()` so it should not be included in `clickhouse-lib`. [#6538](https://github.com/yandex/ClickHouse/pull/6538) ([Orivej Desh](https://github.com/orivej))
-* Test for crash in `FULL|RIGHT JOIN` with nulls in right table's keys. [#6362](https://github.com/yandex/ClickHouse/pull/6362) ([Artem Zuikov](https://github.com/4ertus2))
-* Added a test for the limit on expansion of aliases just in case. [#6442](https://github.com/yandex/ClickHouse/pull/6442) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Switched from `boost::filesystem` to `std::filesystem` where appropriate. [#6253](https://github.com/yandex/ClickHouse/pull/6253) [#6385](https://github.com/yandex/ClickHouse/pull/6385) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added RPM packages to website. [#6251](https://github.com/yandex/ClickHouse/pull/6251) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add a test for fixed `Unknown identifier` exception in `IN` section. [#6708](https://github.com/yandex/ClickHouse/pull/6708) ([Artem Zuikov](https://github.com/4ertus2))
-* Simplify `shared_ptr_helper` because people facing difficulties understanding it. [#6675](https://github.com/yandex/ClickHouse/pull/6675) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added performance tests for fixed Gorilla and DoubleDelta codec. [#6179](https://github.com/yandex/ClickHouse/pull/6179) ([Vasily Nemkov](https://github.com/Enmk))
-* Split the integration test `test_dictionaries` into 4 separate tests. [#6776](https://github.com/yandex/ClickHouse/pull/6776) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix PVS-Studio warning in `PipelineExecutor`. [#6777](https://github.com/yandex/ClickHouse/pull/6777) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Allow to use `library` dictionary source with ASan. [#6482](https://github.com/yandex/ClickHouse/pull/6482) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added option to generate changelog from a list of PRs. [#6350](https://github.com/yandex/ClickHouse/pull/6350) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Lock the `TinyLog` storage when reading. [#6226](https://github.com/yandex/ClickHouse/pull/6226) ([akuzm](https://github.com/akuzm))
-* Check for broken symlinks in CI. [#6634](https://github.com/yandex/ClickHouse/pull/6634) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Increase timeout for "stack overflow" test because it may take a long time in debug build. [#6637](https://github.com/yandex/ClickHouse/pull/6637) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added a check for double whitespaces. [#6643](https://github.com/yandex/ClickHouse/pull/6643) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix `new/delete` memory tracking when build with sanitizers. Tracking is not clear. It only prevents memory limit exceptions in tests. [#6450](https://github.com/yandex/ClickHouse/pull/6450) ([Artem Zuikov](https://github.com/4ertus2))
-* Enable back the check of undefined symbols while linking. [#6453](https://github.com/yandex/ClickHouse/pull/6453) ([Ivan](https://github.com/abyss7))
-* Avoid rebuilding `hyperscan` every day. [#6307](https://github.com/yandex/ClickHouse/pull/6307) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed UBSan report in `ProtobufWriter`. [#6163](https://github.com/yandex/ClickHouse/pull/6163) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Don't allow to use query profiler with sanitizers because it is not compatible. [#6769](https://github.com/yandex/ClickHouse/pull/6769) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add test for reloading a dictionary after fail by timer. [#6114](https://github.com/yandex/ClickHouse/pull/6114) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix inconsistency in `PipelineExecutor::prepareProcessor` argument type. [#6494](https://github.com/yandex/ClickHouse/pull/6494) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Added a test for bad URIs. [#6493](https://github.com/yandex/ClickHouse/pull/6493) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added more checks to `CAST` function. This should get more information about segmentation fault in fuzzy test. [#6346](https://github.com/yandex/ClickHouse/pull/6346) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Added `gcc-9` support to `docker/builder` container that builds image locally. [#6333](https://github.com/yandex/ClickHouse/pull/6333) ([Gleb Novikov](https://github.com/NanoBjorn))
-* Test for primary key with `LowCardinality(String)`. [#5044](https://github.com/yandex/ClickHouse/issues/5044) [#6219](https://github.com/yandex/ClickHouse/pull/6219) ([dimarub2000](https://github.com/dimarub2000))
-* Fixed tests affected by slow stack traces printing. [#6315](https://github.com/yandex/ClickHouse/pull/6315) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add a test case for crash in `groupUniqArray` fixed in [#6029](https://github.com/yandex/ClickHouse/pull/6029). [#4402](https://github.com/yandex/ClickHouse/issues/4402) [#6129](https://github.com/yandex/ClickHouse/pull/6129) ([akuzm](https://github.com/akuzm))
-* Fixed indices mutations tests. [#6645](https://github.com/yandex/ClickHouse/pull/6645) ([Nikita Vasilev](https://github.com/nikvas0))
-* In performance test, do not read query log for queries we didn't run. [#6427](https://github.com/yandex/ClickHouse/pull/6427) ([akuzm](https://github.com/akuzm))
-* Materialized view now could be created with any low cardinality types regardless to the setting about suspicious low cardinality types. [#6428](https://github.com/yandex/ClickHouse/pull/6428) ([Olga Khvostikova](https://github.com/stavrolia))
-* Updated tests for `send_logs_level` setting. [#6207](https://github.com/yandex/ClickHouse/pull/6207) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix build under gcc-8.2. [#6196](https://github.com/yandex/ClickHouse/pull/6196) ([Max Akhmedov](https://github.com/zlobober))
-* Fix build with internal libc++. [#6724](https://github.com/yandex/ClickHouse/pull/6724) ([Ivan](https://github.com/abyss7))
-* Fix shared build with `rdkafka` library [#6101](https://github.com/yandex/ClickHouse/pull/6101) ([Ivan](https://github.com/abyss7))
-* Fixes for Mac OS build (incomplete). [#6390](https://github.com/yandex/ClickHouse/pull/6390) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#6429](https://github.com/yandex/ClickHouse/pull/6429) ([alex-zaitsev](https://github.com/alex-zaitsev))
-* Fix "splitted" build. [#6618](https://github.com/yandex/ClickHouse/pull/6618) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Other build fixes: [#6186](https://github.com/yandex/ClickHouse/pull/6186) ([Amos Bird](https://github.com/amosbird)) [#6486](https://github.com/yandex/ClickHouse/pull/6486) [#6348](https://github.com/yandex/ClickHouse/pull/6348) ([vxider](https://github.com/Vxider)) [#6744](https://github.com/yandex/ClickHouse/pull/6744) ([Ivan](https://github.com/abyss7)) [#6016](https://github.com/yandex/ClickHouse/pull/6016) [#6421](https://github.com/yandex/ClickHouse/pull/6421) [#6491](https://github.com/yandex/ClickHouse/pull/6491) ([proller](https://github.com/proller))
+* Remove the Compiler (runtime template instantiation) because we have won over its performance. [#6646](https://github.com/ClickHouse/ClickHouse/pull/6646) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added a performance test to show the performance degradation in gcc-9 in a more isolated way. [#6302](https://github.com/ClickHouse/ClickHouse/pull/6302) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added table function `numbers_mt`, which is a multithreaded version of `numbers` (see the sketch after this list). Updated performance tests with hash functions. [#6554](https://github.com/ClickHouse/ClickHouse/pull/6554) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Comparison mode in `clickhouse-benchmark` [#6220](https://github.com/ClickHouse/ClickHouse/issues/6220) [#6343](https://github.com/ClickHouse/ClickHouse/pull/6343) ([dimarub2000](https://github.com/dimarub2000))
+* Best effort for printing stack traces. Also added `SIGPROF` as a debugging signal to print stack trace of a running thread. [#6529](https://github.com/ClickHouse/ClickHouse/pull/6529) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Every function in its own file, part 10. [#6321](https://github.com/ClickHouse/ClickHouse/pull/6321) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Remove duplicated const `TABLE_IS_READ_ONLY`. [#6566](https://github.com/ClickHouse/ClickHouse/pull/6566) ([filimonov](https://github.com/filimonov))
+* Formatting changes for `StringHashMap` PR [#5417](https://github.com/ClickHouse/ClickHouse/issues/5417). [#6700](https://github.com/ClickHouse/ClickHouse/pull/6700) ([akuzm](https://github.com/akuzm))
+* Better subquery for join creation in `ExpressionAnalyzer`. [#6824](https://github.com/ClickHouse/ClickHouse/pull/6824) ([Artem Zuikov](https://github.com/4ertus2))
+* Remove a redundant condition (found by PVS Studio). [#6775](https://github.com/ClickHouse/ClickHouse/pull/6775) ([akuzm](https://github.com/akuzm))
+* Separate the hash table interface for `ReverseIndex`. [#6672](https://github.com/ClickHouse/ClickHouse/pull/6672) ([akuzm](https://github.com/akuzm))
+* Refactoring of settings. [#6689](https://github.com/ClickHouse/ClickHouse/pull/6689) ([alesapin](https://github.com/alesapin))
+* Add comments for `set` index functions. [#6319](https://github.com/ClickHouse/ClickHouse/pull/6319) ([Nikita Vasilev](https://github.com/nikvas0))
+* Increase OOM score in debug version on Linux. [#6152](https://github.com/ClickHouse/ClickHouse/pull/6152) ([akuzm](https://github.com/akuzm))
+* HDFS HA now works in debug builds. [#6650](https://github.com/ClickHouse/ClickHouse/pull/6650) ([Weiqing Xu](https://github.com/weiqxu))
+* Added a test to `transform_query_for_external_database`. [#6388](https://github.com/ClickHouse/ClickHouse/pull/6388) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add test for multiple materialized views for Kafka table. [#6509](https://github.com/ClickHouse/ClickHouse/pull/6509) ([Ivan](https://github.com/abyss7))
+* Make a better build scheme. [#6500](https://github.com/ClickHouse/ClickHouse/pull/6500) ([Ivan](https://github.com/abyss7))
+* Fixed the `test_external_dictionaries` integration test in case it was executed under a non-root user. [#6507](https://github.com/ClickHouse/ClickHouse/pull/6507) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Added a test: the bug reproduces when the total size of written packets exceeds `DBMS_DEFAULT_BUFFER_SIZE`. [#6204](https://github.com/ClickHouse/ClickHouse/pull/6204) ([Yuriy Baranov](https://github.com/yurriy))
+* Added a test for `RENAME` table race condition [#6752](https://github.com/ClickHouse/ClickHouse/pull/6752) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Avoid data race on Settings in `KILL QUERY`. [#6753](https://github.com/ClickHouse/ClickHouse/pull/6753) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add integration test for handling errors by a cache dictionary. [#6755](https://github.com/ClickHouse/ClickHouse/pull/6755) ([Vitaly Baranov](https://github.com/vitlibar))
+* Move `input_format_defaults_for_omitted_fields` to incompatible changes [#6573](https://github.com/ClickHouse/ClickHouse/pull/6573) ([Artem Zuikov](https://github.com/4ertus2))
+* Disable parsing of ELF object files on Mac OS, because it makes no sense. [#6578](https://github.com/ClickHouse/ClickHouse/pull/6578) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Attempt to make changelog generator better. [#6327](https://github.com/ClickHouse/ClickHouse/pull/6327) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Adding `-Wshadow` switch to the GCC. [#6325](https://github.com/ClickHouse/ClickHouse/pull/6325) ([kreuzerkrieg](https://github.com/kreuzerkrieg))
+* Removed obsolete code for `mimalloc` support. [#6715](https://github.com/ClickHouse/ClickHouse/pull/6715) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `zlib-ng` determines x86 capabilities and saves this info to global variables. This is done in the `deflateInit` call, which may be made by different threads simultaneously. To avoid multithreaded writes, do it on library startup. [#6141](https://github.com/ClickHouse/ClickHouse/pull/6141) ([akuzm](https://github.com/akuzm))
+* Regression test for a bug in JOIN which was fixed in [#5192](https://github.com/ClickHouse/ClickHouse/issues/5192). [#6147](https://github.com/ClickHouse/ClickHouse/pull/6147) ([Bakhtiyor Ruziev](https://github.com/theruziev))
+* Fixed MSan report. [#6144](https://github.com/ClickHouse/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix flapping TTL test. [#6782](https://github.com/ClickHouse/ClickHouse/pull/6782) ([Anton Popov](https://github.com/CurtizJ))
+* Fixed false data race in `MergeTreeDataPart::is_frozen` field. [#6583](https://github.com/ClickHouse/ClickHouse/pull/6583) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed timeouts in the fuzz test. In the previous version, it managed to find a false hangup in the query `SELECT * FROM numbers_mt(gccMurmurHash(''))`. [#6582](https://github.com/ClickHouse/ClickHouse/pull/6582) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added debug checks to `static_cast` of columns. [#6581](https://github.com/ClickHouse/ClickHouse/pull/6581) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Support for Oracle Linux in official RPM packages. [#6356](https://github.com/ClickHouse/ClickHouse/issues/6356) [#6585](https://github.com/ClickHouse/ClickHouse/pull/6585) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Changed json perftests from `once` to `loop` type. [#6536](https://github.com/ClickHouse/ClickHouse/pull/6536) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* `odbc-bridge.cpp` defines `main()` so it should not be included in `clickhouse-lib`. [#6538](https://github.com/ClickHouse/ClickHouse/pull/6538) ([Orivej Desh](https://github.com/orivej))
+* Test for crash in `FULL|RIGHT JOIN` with nulls in right table's keys. [#6362](https://github.com/ClickHouse/ClickHouse/pull/6362) ([Artem Zuikov](https://github.com/4ertus2))
+* Added a test for the limit on expansion of aliases just in case. [#6442](https://github.com/ClickHouse/ClickHouse/pull/6442) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Switched from `boost::filesystem` to `std::filesystem` where appropriate. [#6253](https://github.com/ClickHouse/ClickHouse/pull/6253) [#6385](https://github.com/ClickHouse/ClickHouse/pull/6385) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added RPM packages to website. [#6251](https://github.com/ClickHouse/ClickHouse/pull/6251) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add a test for fixed `Unknown identifier` exception in `IN` section. [#6708](https://github.com/ClickHouse/ClickHouse/pull/6708) ([Artem Zuikov](https://github.com/4ertus2))
+* Simplify `shared_ptr_helper` because people face difficulties understanding it. [#6675](https://github.com/ClickHouse/ClickHouse/pull/6675) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added performance tests for fixed Gorilla and DoubleDelta codec. [#6179](https://github.com/ClickHouse/ClickHouse/pull/6179) ([Vasily Nemkov](https://github.com/Enmk))
+* Split the integration test `test_dictionaries` into 4 separate tests. [#6776](https://github.com/ClickHouse/ClickHouse/pull/6776) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix PVS-Studio warning in `PipelineExecutor`. [#6777](https://github.com/ClickHouse/ClickHouse/pull/6777) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Allow to use `library` dictionary source with ASan. [#6482](https://github.com/ClickHouse/ClickHouse/pull/6482) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added option to generate changelog from a list of PRs. [#6350](https://github.com/ClickHouse/ClickHouse/pull/6350) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Lock the `TinyLog` storage when reading. [#6226](https://github.com/ClickHouse/ClickHouse/pull/6226) ([akuzm](https://github.com/akuzm))
+* Check for broken symlinks in CI. [#6634](https://github.com/ClickHouse/ClickHouse/pull/6634) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Increase timeout for "stack overflow" test because it may take a long time in debug build. [#6637](https://github.com/ClickHouse/ClickHouse/pull/6637) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added a check for double whitespaces. [#6643](https://github.com/ClickHouse/ClickHouse/pull/6643) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix `new/delete` memory tracking when built with sanitizers. Tracking is not clear. It only prevents memory limit exceptions in tests. [#6450](https://github.com/ClickHouse/ClickHouse/pull/6450) ([Artem Zuikov](https://github.com/4ertus2))
+* Enable back the check of undefined symbols while linking. [#6453](https://github.com/ClickHouse/ClickHouse/pull/6453) ([Ivan](https://github.com/abyss7))
+* Avoid rebuilding `hyperscan` every day. [#6307](https://github.com/ClickHouse/ClickHouse/pull/6307) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed UBSan report in `ProtobufWriter`. [#6163](https://github.com/ClickHouse/ClickHouse/pull/6163) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Don't allow using the query profiler with sanitizers because it is not compatible. [#6769](https://github.com/ClickHouse/ClickHouse/pull/6769) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add a test for reloading a dictionary after a failure, by timer. [#6114](https://github.com/ClickHouse/ClickHouse/pull/6114) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix inconsistency in `PipelineExecutor::prepareProcessor` argument type. [#6494](https://github.com/ClickHouse/ClickHouse/pull/6494) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Added a test for bad URIs. [#6493](https://github.com/ClickHouse/ClickHouse/pull/6493) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added more checks to the `CAST` function. This should provide more information about the segmentation fault in the fuzz test. [#6346](https://github.com/ClickHouse/ClickHouse/pull/6346) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Added `gcc-9` support to `docker/builder` container that builds image locally. [#6333](https://github.com/ClickHouse/ClickHouse/pull/6333) ([Gleb Novikov](https://github.com/NanoBjorn))
+* Test for primary key with `LowCardinality(String)`. [#5044](https://github.com/ClickHouse/ClickHouse/issues/5044) [#6219](https://github.com/ClickHouse/ClickHouse/pull/6219) ([dimarub2000](https://github.com/dimarub2000))
+* Fixed tests affected by slow stack traces printing. [#6315](https://github.com/ClickHouse/ClickHouse/pull/6315) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add a test case for crash in `groupUniqArray` fixed in [#6029](https://github.com/ClickHouse/ClickHouse/pull/6029). [#4402](https://github.com/ClickHouse/ClickHouse/issues/4402) [#6129](https://github.com/ClickHouse/ClickHouse/pull/6129) ([akuzm](https://github.com/akuzm))
+* Fixed indices mutations tests. [#6645](https://github.com/ClickHouse/ClickHouse/pull/6645) ([Nikita Vasilev](https://github.com/nikvas0))
+* In performance test, do not read query log for queries we didn't run. [#6427](https://github.com/ClickHouse/ClickHouse/pull/6427) ([akuzm](https://github.com/akuzm))
+* Materialized views can now be created with any low-cardinality types, regardless of the setting about suspicious low-cardinality types (see the example after this list). [#6428](https://github.com/ClickHouse/ClickHouse/pull/6428) ([Olga Khvostikova](https://github.com/stavrolia))
+* Updated tests for `send_logs_level` setting. [#6207](https://github.com/ClickHouse/ClickHouse/pull/6207) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix build under gcc-8.2. [#6196](https://github.com/ClickHouse/ClickHouse/pull/6196) ([Max Akhmedov](https://github.com/zlobober))
+* Fix build with internal libc++. [#6724](https://github.com/ClickHouse/ClickHouse/pull/6724) ([Ivan](https://github.com/abyss7))
+* Fix shared build with `rdkafka` library [#6101](https://github.com/ClickHouse/ClickHouse/pull/6101) ([Ivan](https://github.com/abyss7))
+* Fixes for Mac OS build (incomplete). [#6390](https://github.com/ClickHouse/ClickHouse/pull/6390) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#6429](https://github.com/ClickHouse/ClickHouse/pull/6429) ([alex-zaitsev](https://github.com/alex-zaitsev))
+* Fix "splitted" build. [#6618](https://github.com/ClickHouse/ClickHouse/pull/6618) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Other build fixes: [#6186](https://github.com/ClickHouse/ClickHouse/pull/6186) ([Amos Bird](https://github.com/amosbird)) [#6486](https://github.com/ClickHouse/ClickHouse/pull/6486) [#6348](https://github.com/ClickHouse/ClickHouse/pull/6348) ([vxider](https://github.com/Vxider)) [#6744](https://github.com/ClickHouse/ClickHouse/pull/6744) ([Ivan](https://github.com/abyss7)) [#6016](https://github.com/ClickHouse/ClickHouse/pull/6016) [#6421](https://github.com/ClickHouse/ClickHouse/pull/6421) [#6491](https://github.com/ClickHouse/ClickHouse/pull/6491) ([proller](https://github.com/proller))
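+
+A minimal sketch of the materialized view change above, assuming the relevant setting is `allow_suspicious_low_cardinality_types` and using hypothetical table names:
+
+```sql
+-- Creating the source table with a suspicious low-cardinality type
+-- still requires the setting:
+SET allow_suspicious_low_cardinality_types = 1;
+CREATE TABLE events (code LowCardinality(UInt8)) ENGINE = MergeTree ORDER BY tuple();
+
+-- The materialized view can now be created regardless of the setting:
+SET allow_suspicious_low_cardinality_types = 0;
+CREATE MATERIALIZED VIEW events_mv ENGINE = Memory AS SELECT code FROM events;
+```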
### Backward Incompatible Change
-* Removed rarely used table function `catBoostPool` and storage `CatBoostPool`. If you have used this table function, please write email to `clickhouse-feedback@yandex-team.com`. Note that CatBoost integration remains and will be supported. [#6279](https://github.com/yandex/ClickHouse/pull/6279) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Disable `ANY RIGHT JOIN` and `ANY FULL JOIN` by default. Set `any_join_get_any_from_right_table` setting to enable them. [#5126](https://github.com/yandex/ClickHouse/issues/5126) [#6351](https://github.com/yandex/ClickHouse/pull/6351) ([Artem Zuikov](https://github.com/4ertus2))
+* Removed the rarely used table function `catBoostPool` and storage `CatBoostPool`. If you have used this table function, please write an email to `clickhouse-feedback@yandex-team.com`. Note that CatBoost integration remains and will be supported. [#6279](https://github.com/ClickHouse/ClickHouse/pull/6279) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Disable `ANY RIGHT JOIN` and `ANY FULL JOIN` by default. Set the `any_join_get_any_from_right_table` setting to enable them (see the example after this list). [#5126](https://github.com/ClickHouse/ClickHouse/issues/5126) [#6351](https://github.com/ClickHouse/ClickHouse/pull/6351) ([Artem Zuikov](https://github.com/4ertus2))
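+
+A sketch of re-enabling these joins, using the setting name from the entry above and hypothetical tables:
+
+```sql
+SET any_join_get_any_from_right_table = 1;
+
+SELECT l.id, r.value
+FROM left_table AS l
+ANY RIGHT JOIN right_table AS r ON l.id = r.id;
+```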
## ClickHouse release 19.11.11.57, 2019-09-13
-* Fix logical error causing segfaults when selecting from Kafka empty topic. [#6902](https://github.com/yandex/ClickHouse/issues/6902) [#6909](https://github.com/yandex/ClickHouse/pull/6909) ([Ivan](https://github.com/abyss7))
-* Fix for function `АrrayEnumerateUniqRanked` with empty arrays in params. [#6928](https://github.com/yandex/ClickHouse/pull/6928) ([proller](https://github.com/proller))
+* Fix logical error causing segfaults when selecting from an empty Kafka topic. [#6902](https://github.com/ClickHouse/ClickHouse/issues/6902) [#6909](https://github.com/ClickHouse/ClickHouse/pull/6909) ([Ivan](https://github.com/abyss7))
+* Fix for the function `arrayEnumerateUniqRanked` with empty arrays in its parameters (see the example after this list). [#6928](https://github.com/ClickHouse/ClickHouse/pull/6928) ([proller](https://github.com/proller))
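+
+For illustration, a sketch of the fixed function; the expected results assume it behaves like `arrayEnumerateUniq` for single-dimension arrays:
+
+```sql
+SELECT arrayEnumerateUniqRanked([1, 2, 1]);  -- [1, 1, 2]
+SELECT arrayEnumerateUniqRanked([]);         -- previously failed; now returns []
+```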
## ClickHouse release 19.13.4.32, 2019-09-10
### Bug Fix
* This release also contains all bug and security fixes from 19.11.9.52 and 19.11.10.54.
-* Fixed data race in `system.parts` table and `ALTER` query. [#6245](https://github.com/yandex/ClickHouse/issues/6245) [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed mismatched header in streams happened in case of reading from empty distributed table with sample and prewhere. [#6167](https://github.com/yandex/ClickHouse/issues/6167) ([Lixiang Qian](https://github.com/fancyqlx)) [#6823](https://github.com/yandex/ClickHouse/pull/6823) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fixed crash when using `IN` clause with a subquery with a tuple. [#6125](https://github.com/yandex/ClickHouse/issues/6125) [#6550](https://github.com/yandex/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
-* Fix case with same column names in `GLOBAL JOIN ON` section. [#6181](https://github.com/yandex/ClickHouse/pull/6181) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix crash when casting types to `Decimal` that do not support it. Throw exception instead. [#6297](https://github.com/yandex/ClickHouse/pull/6297) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed crash in `extractAll()` function. [#6644](https://github.com/yandex/ClickHouse/pull/6644) ([Artem Zuikov](https://github.com/4ertus2))
-* Query transformation for `MySQL`, `ODBC`, `JDBC` table functions now works properly for `SELECT WHERE` queries with multiple `AND` expressions. [#6381](https://github.com/yandex/ClickHouse/issues/6381) [#6676](https://github.com/yandex/ClickHouse/pull/6676) ([dimarub2000](https://github.com/dimarub2000))
-* Added previous declaration checks for MySQL 8 integration. [#6569](https://github.com/yandex/ClickHouse/pull/6569) ([Rafael David Tinoco](https://github.com/rafaeldtinoco))
+* Fixed data race in `system.parts` table and `ALTER` query. [#6245](https://github.com/ClickHouse/ClickHouse/issues/6245) [#6513](https://github.com/ClickHouse/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed mismatched headers in streams that happened when reading from an empty distributed table with sample and prewhere. [#6167](https://github.com/ClickHouse/ClickHouse/issues/6167) ([Lixiang Qian](https://github.com/fancyqlx)) [#6823](https://github.com/ClickHouse/ClickHouse/pull/6823) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fixed crash when using an `IN` clause with a subquery with a tuple (see the example after this list). [#6125](https://github.com/ClickHouse/ClickHouse/issues/6125) [#6550](https://github.com/ClickHouse/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
+* Fix case with same column names in `GLOBAL JOIN ON` section. [#6181](https://github.com/ClickHouse/ClickHouse/pull/6181) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix crash when casting types to `Decimal` that do not support it; an exception is thrown instead. [#6297](https://github.com/ClickHouse/ClickHouse/pull/6297) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed crash in `extractAll()` function. [#6644](https://github.com/ClickHouse/ClickHouse/pull/6644) ([Artem Zuikov](https://github.com/4ertus2))
+* Query transformation for `MySQL`, `ODBC`, `JDBC` table functions now works properly for `SELECT WHERE` queries with multiple `AND` expressions. [#6381](https://github.com/ClickHouse/ClickHouse/issues/6381) [#6676](https://github.com/ClickHouse/ClickHouse/pull/6676) ([dimarub2000](https://github.com/dimarub2000))
+* Added previous declaration checks for MySQL 8 integration. [#6569](https://github.com/ClickHouse/ClickHouse/pull/6569) ([Rafael David Tinoco](https://github.com/rafaeldtinoco))
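+
+A minimal sketch of the previously crashing pattern from the `IN` fix above, with hypothetical tables `t1` and `t2`:
+
+```sql
+SELECT count()
+FROM t1
+WHERE (a, b) IN (SELECT a, b FROM t2);
+```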
### Security Fix
-* Fix two vulnerabilities in codecs in decompression phase (malicious user can fabricate compressed data that will lead to buffer overflow in decompression). [#6670](https://github.com/yandex/ClickHouse/pull/6670) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix two vulnerabilities in codecs in the decompression phase (a malicious user could fabricate compressed data that leads to a buffer overflow during decompression). [#6670](https://github.com/ClickHouse/ClickHouse/pull/6670) ([Artem Zuikov](https://github.com/4ertus2))
## ClickHouse release 19.11.10.54, 2019-09-10
### Bug Fix
-* Do store offsets for Kafka messages manually to be able to commit them all at once for all partitions. Fixes potential duplication in "one consumer - many partitions" scenario. [#6872](https://github.com/yandex/ClickHouse/pull/6872) ([Ivan](https://github.com/abyss7))
+* Store offsets for Kafka messages manually so they can all be committed at once for all partitions. Fixes potential duplication in the "one consumer - many partitions" scenario. [#6872](https://github.com/ClickHouse/ClickHouse/pull/6872) ([Ivan](https://github.com/abyss7))
## ClickHouse release 19.11.9.52, 2019-09-06
-* Improve error handling in cache dictionaries. [#6737](https://github.com/yandex/ClickHouse/pull/6737) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fixed bug in function `arrayEnumerateUniqRanked`. [#6779](https://github.com/yandex/ClickHouse/pull/6779) ([proller](https://github.com/proller))
-* Fix `JSONExtract` function while extracting a `Tuple` from JSON. [#6718](https://github.com/yandex/ClickHouse/pull/6718) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fixed possible data loss after `ALTER DELETE` query on table with skipping index. [#6224](https://github.com/yandex/ClickHouse/issues/6224) [#6282](https://github.com/yandex/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
-* Fixed performance test. [#6392](https://github.com/yandex/ClickHouse/pull/6392) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Parquet: Fix reading boolean columns. [#6579](https://github.com/yandex/ClickHouse/pull/6579) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed wrong behaviour of `nullIf` function for constant arguments. [#6518](https://github.com/yandex/ClickHouse/pull/6518) ([Guillaume Tassery](https://github.com/YiuRULE)) [#6580](https://github.com/yandex/ClickHouse/pull/6580) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/yandex/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
-* Fixed an issue when long `ALTER UPDATE` or `ALTER DELETE` may prevent regular merges to run. Prevent mutations from executing if there is no enough free threads available. [#6502](https://github.com/yandex/ClickHouse/issues/6502) [#6617](https://github.com/yandex/ClickHouse/pull/6617) ([tavplubix](https://github.com/tavplubix))
-* Fixed error with processing "timezone" in server configuration file. [#6709](https://github.com/yandex/ClickHouse/pull/6709) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix kafka tests. [#6805](https://github.com/yandex/ClickHouse/pull/6805) ([Ivan](https://github.com/abyss7))
+* Improve error handling in cache dictionaries. [#6737](https://github.com/ClickHouse/ClickHouse/pull/6737) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed bug in function `arrayEnumerateUniqRanked`. [#6779](https://github.com/ClickHouse/ClickHouse/pull/6779) ([proller](https://github.com/proller))
+* Fix the `JSONExtract` function when extracting a `Tuple` from JSON (see the example after this list). [#6718](https://github.com/ClickHouse/ClickHouse/pull/6718) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed possible data loss after `ALTER DELETE` query on table with skipping index. [#6224](https://github.com/ClickHouse/ClickHouse/issues/6224) [#6282](https://github.com/ClickHouse/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
+* Fixed performance test. [#6392](https://github.com/ClickHouse/ClickHouse/pull/6392) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Parquet: Fix reading boolean columns. [#6579](https://github.com/ClickHouse/ClickHouse/pull/6579) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed wrong behaviour of `nullIf` function for constant arguments. [#6518](https://github.com/ClickHouse/ClickHouse/pull/6518) ([Guillaume Tassery](https://github.com/YiuRULE)) [#6580](https://github.com/ClickHouse/ClickHouse/pull/6580) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/ClickHouse/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
+* Fixed an issue where a long `ALTER UPDATE` or `ALTER DELETE` may prevent regular merges from running. Prevent mutations from executing if there are not enough free threads available. [#6502](https://github.com/ClickHouse/ClickHouse/issues/6502) [#6617](https://github.com/ClickHouse/ClickHouse/pull/6617) ([tavplubix](https://github.com/tavplubix))
+* Fixed an error with processing "timezone" in the server configuration file. [#6709](https://github.com/ClickHouse/ClickHouse/pull/6709) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix Kafka tests. [#6805](https://github.com/ClickHouse/ClickHouse/pull/6805) ([Ivan](https://github.com/abyss7))
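+
+An example of extracting a `Tuple` with `JSONExtract`, as fixed above:
+
+```sql
+SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}',
+                   'Tuple(a String, b Array(Float64))');
+-- ('hello', [-100, 200, 300])
+```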
### Security Fix
-* If the attacker has write access to ZooKeeper and is able to run custom server available from the network where ClickHouse run, it can create custom-built malicious server that will act as ClickHouse replica and register it in ZooKeeper. When another replica will fetch data part from malicious replica, it can force clickhouse-server to write to arbitrary path on filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/yandex/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* If an attacker has write access to ZooKeeper and is able to run a custom server available from the network where ClickHouse runs, they can create a custom-built malicious server that acts as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/ClickHouse/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.13.3.26, 2019-08-22
### Bug Fix
-* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/yandex/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
-* Fix NPE when using IN clause with a subquery with a tuple. [#6125](https://github.com/yandex/ClickHouse/issues/6125) [#6550](https://github.com/yandex/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
-* Fixed an issue that if a stale replica becomes alive, it may still have data parts that were removed by DROP PARTITION. [#6522](https://github.com/yandex/ClickHouse/issues/6522) [#6523](https://github.com/yandex/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
-* Fixed issue with parsing CSV [#6426](https://github.com/yandex/ClickHouse/issues/6426) [#6559](https://github.com/yandex/ClickHouse/pull/6559) ([tavplubix](https://github.com/tavplubix))
-* Fixed data race in system.parts table and ALTER query. This fixes [#6245](https://github.com/yandex/ClickHouse/issues/6245). [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happed due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/yandex/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed possible data loss after `ALTER DELETE` query on table with skipping index. [#6224](https://github.com/yandex/ClickHouse/issues/6224) [#6282](https://github.com/yandex/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
+* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1` (see the example after this list). [#6543](https://github.com/ClickHouse/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
+* Fix NPE when using IN clause with a subquery with a tuple. [#6125](https://github.com/ClickHouse/ClickHouse/issues/6125) [#6550](https://github.com/ClickHouse/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
+* Fixed an issue where, if a stale replica becomes alive, it may still have data parts that were removed by DROP PARTITION. [#6522](https://github.com/ClickHouse/ClickHouse/issues/6522) [#6523](https://github.com/ClickHouse/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
+* Fixed an issue with parsing CSV. [#6426](https://github.com/ClickHouse/ClickHouse/issues/6426) [#6559](https://github.com/ClickHouse/ClickHouse/pull/6559) ([tavplubix](https://github.com/tavplubix))
+* Fixed data race in system.parts table and ALTER query. This fixes [#6245](https://github.com/ClickHouse/ClickHouse/issues/6245). [#6513](https://github.com/ClickHouse/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happen due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/ClickHouse/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed possible data loss after `ALTER DELETE` query on table with skipping index. [#6224](https://github.com/ClickHouse/ClickHouse/issues/6224) [#6282](https://github.com/ClickHouse/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
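+
+A sketch of the fixed `ALTER TABLE ... UPDATE` case from the first entry above, with a hypothetical table:
+
+```sql
+CREATE TABLE hits (dt Date, user_id UInt64, banned UInt8)
+ENGINE = MergeTree ORDER BY user_id
+SETTINGS enable_mixed_granularity_parts = 1;
+
+ALTER TABLE hits UPDATE banned = 1 WHERE user_id = 42;
+```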
### Security Fix
-* If the attacker has write access to ZooKeeper and is able to run custom server available from the network where ClickHouse run, it can create custom-built malicious server that will act as ClickHouse replica and register it in ZooKeeper. When another replica will fetch data part from malicious replica, it can force clickhouse-server to write to arbitrary path on filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/yandex/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* If an attacker has write access to ZooKeeper and is able to run a custom server available from the network where ClickHouse runs, they can create a custom-built malicious server that acts as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/ClickHouse/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.13.2.19, 2019-08-14
### New Feature
-* Sampling profiler on query level. [Example](https://gist.github.com/alexey-milovidov/92758583dd41c24c360fdb8d6a4da194). [#4247](https://github.com/yandex/ClickHouse/issues/4247) ([laplab](https://github.com/laplab)) [#6124](https://github.com/yandex/ClickHouse/pull/6124) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#6250](https://github.com/yandex/ClickHouse/pull/6250) [#6283](https://github.com/yandex/ClickHouse/pull/6283) [#6386](https://github.com/yandex/ClickHouse/pull/6386)
-* Allow to specify a list of columns with `COLUMNS('regexp')` expression that works like a more sophisticated variant of `*` asterisk. [#5951](https://github.com/yandex/ClickHouse/pull/5951) ([mfridental](https://github.com/mfridental)), ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `CREATE TABLE AS table_function()` is now possible [#6057](https://github.com/yandex/ClickHouse/pull/6057) ([dimarub2000](https://github.com/dimarub2000))
-* Adam optimizer for stochastic gradient descent is used by default in `stochasticLinearRegression()` and `stochasticLogisticRegression()` aggregate functions, because it shows good quality without almost any tuning. [#6000](https://github.com/yandex/ClickHouse/pull/6000) ([Quid37](https://github.com/Quid37))
-* Added functions for working with the сustom week number [#5212](https://github.com/yandex/ClickHouse/pull/5212) ([Andy Yang](https://github.com/andyyzh))
-* `RENAME` queries now work with all storages. [#5953](https://github.com/yandex/ClickHouse/pull/5953) ([Ivan](https://github.com/abyss7))
-* Now client receive logs from server with any desired level by setting `send_logs_level` regardless to the log level specified in server settings. [#5964](https://github.com/yandex/ClickHouse/pull/5964) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
+* Sampling profiler on query level. [Example](https://gist.github.com/alexey-milovidov/92758583dd41c24c360fdb8d6a4da194). [#4247](https://github.com/ClickHouse/ClickHouse/issues/4247) ([laplab](https://github.com/laplab)) [#6124](https://github.com/ClickHouse/ClickHouse/pull/6124) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#6250](https://github.com/ClickHouse/ClickHouse/pull/6250) [#6283](https://github.com/ClickHouse/ClickHouse/pull/6283) [#6386](https://github.com/ClickHouse/ClickHouse/pull/6386)
+* Allow specifying a list of columns with the `COLUMNS('regexp')` expression, which works like a more sophisticated variant of the `*` asterisk (see the example after this list). [#5951](https://github.com/ClickHouse/ClickHouse/pull/5951) ([mfridental](https://github.com/mfridental)), ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `CREATE TABLE AS table_function()` is now possible [#6057](https://github.com/ClickHouse/ClickHouse/pull/6057) ([dimarub2000](https://github.com/dimarub2000))
+* The Adam optimizer for stochastic gradient descent is used by default in the `stochasticLinearRegression()` and `stochasticLogisticRegression()` aggregate functions, because it shows good quality with almost no tuning. [#6000](https://github.com/ClickHouse/ClickHouse/pull/6000) ([Quid37](https://github.com/Quid37))
+* Added functions for working with custom week numbers. [#5212](https://github.com/ClickHouse/ClickHouse/pull/5212) ([Andy Yang](https://github.com/andyyzh))
+* `RENAME` queries now work with all storages. [#5953](https://github.com/ClickHouse/ClickHouse/pull/5953) ([Ivan](https://github.com/abyss7))
+* Now the client receives logs from the server at any desired level by setting `send_logs_level`, regardless of the log level specified in the server settings. [#5964](https://github.com/ClickHouse/ClickHouse/pull/5964) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
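+
+An example of the `COLUMNS('regexp')` expression, with a hypothetical table:
+
+```sql
+CREATE TABLE col_names (aa Int8, ab Int8, bc Int8) ENGINE = Memory;
+
+-- Selects only the columns whose names match the regexp, here aa and ab:
+SELECT COLUMNS('^a') FROM col_names;
+```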
### Backward Incompatible Change
-* The setting `input_format_defaults_for_omitted_fields` is enabled by default. Inserts in Distibuted tables need this setting to be the same on cluster (you need to set it before rolling update). It enables calculation of complex default expressions for omitted fields in `JSONEachRow` and `CSV*` formats. It should be the expected behaviour but may lead to negligible performance difference. [#6043](https://github.com/yandex/ClickHouse/pull/6043) ([Artem Zuikov](https://github.com/4ertus2)), [#5625](https://github.com/yandex/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
+* The setting `input_format_defaults_for_omitted_fields` is enabled by default. Inserts into Distributed tables need this setting to be the same across the cluster (you need to set it before a rolling update). It enables calculation of complex default expressions for omitted fields in `JSONEachRow` and `CSV*` formats (see the example after this list). This is the expected behaviour but may lead to a negligible performance difference. [#6043](https://github.com/ClickHouse/ClickHouse/pull/6043) ([Artem Zuikov](https://github.com/4ertus2)), [#5625](https://github.com/ClickHouse/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
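+
+A sketch of what the now-default setting enables, with a hypothetical table:
+
+```sql
+CREATE TABLE t (x UInt32, y UInt32 DEFAULT x * 2) ENGINE = Memory;
+
+-- With input_format_defaults_for_omitted_fields = 1 (the new default),
+-- the omitted column y is computed from its DEFAULT expression:
+INSERT INTO t FORMAT JSONEachRow {"x": 5}
+
+SELECT * FROM t;  -- x = 5, y = 10
+```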
### Experimental features
-* New query processing pipeline. Use `experimental_use_processors=1` option to enable it. Use for your own trouble. [#4914](https://github.com/yandex/ClickHouse/pull/4914) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* New query processing pipeline. Use the `experimental_use_processors=1` option to enable it (see the example after this list). Use at your own risk. [#4914](https://github.com/ClickHouse/ClickHouse/pull/4914) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
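+
+Opting in, per the entry above:
+
+```sql
+SET experimental_use_processors = 1;
+SELECT sum(number) FROM numbers(1000000);  -- runs through the new pipeline
+```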
### Bug Fix
* Kafka integration has been fixed in this version.
-* Fixed `DoubleDelta` encoding of `Int64` for large `DoubleDelta` values, improved `DoubleDelta` encoding for random data for `Int32`. [#5998](https://github.com/yandex/ClickHouse/pull/5998) ([Vasily Nemkov](https://github.com/Enmk))
-* Fixed overestimation of `max_rows_to_read` if the setting `merge_tree_uniform_read_distribution` is set to 0. [#6019](https://github.com/yandex/ClickHouse/pull/6019) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed `DoubleDelta` encoding of `Int64` for large `DoubleDelta` values, improved `DoubleDelta` encoding for random data for `Int32` (see the example after this list). [#5998](https://github.com/ClickHouse/ClickHouse/pull/5998) ([Vasily Nemkov](https://github.com/Enmk))
+* Fixed overestimation of `max_rows_to_read` if the setting `merge_tree_uniform_read_distribution` is set to 0. [#6019](https://github.com/ClickHouse/ClickHouse/pull/6019) ([alexey-milovidov](https://github.com/alexey-milovidov))
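+
+A sketch of a table exercising the fixed codec on both affected integer widths (hypothetical names):
+
+```sql
+CREATE TABLE metrics
+(
+    ts DateTime,
+    big Int64 CODEC(DoubleDelta),
+    small Int32 CODEC(DoubleDelta)
+)
+ENGINE = MergeTree ORDER BY ts;
+```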
### Improvement
-* Throws an exception if `config.d` file doesn't have the corresponding root element as the config file [#6123](https://github.com/yandex/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000))
+* Throw an exception if a `config.d` file doesn't have the same root element as the config file. [#6123](https://github.com/ClickHouse/ClickHouse/pull/6123) ([dimarub2000](https://github.com/dimarub2000))
### Performance Improvement
-* Optimize `count()`. Now it uses the smallest column (if possible). [#6028](https://github.com/yandex/ClickHouse/pull/6028) ([Amos Bird](https://github.com/amosbird))
+* Optimize `count()`. Now it uses the smallest column (if possible). [#6028](https://github.com/ClickHouse/ClickHouse/pull/6028) ([Amos Bird](https://github.com/amosbird))
### Build/Testing/Packaging Improvement
-* Report memory usage in performance tests. [#5899](https://github.com/yandex/ClickHouse/pull/5899) ([akuzm](https://github.com/akuzm))
-* Fix build with external `libcxx` [#6010](https://github.com/yandex/ClickHouse/pull/6010) ([Ivan](https://github.com/abyss7))
-* Fix shared build with `rdkafka` library [#6101](https://github.com/yandex/ClickHouse/pull/6101) ([Ivan](https://github.com/abyss7))
+* Report memory usage in performance tests. [#5899](https://github.com/ClickHouse/ClickHouse/pull/5899) ([akuzm](https://github.com/akuzm))
+* Fix build with external `libcxx` [#6010](https://github.com/ClickHouse/ClickHouse/pull/6010) ([Ivan](https://github.com/abyss7))
+* Fix shared build with `rdkafka` library [#6101](https://github.com/ClickHouse/ClickHouse/pull/6101) ([Ivan](https://github.com/abyss7))
## ClickHouse release 19.11.8.46, 2019-08-22
### Bug Fix
-* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/yandex/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
-* Fix NPE when using IN clause with a subquery with a tuple. [#6125](https://github.com/yandex/ClickHouse/issues/6125) [#6550](https://github.com/yandex/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
-* Fixed an issue that if a stale replica becomes alive, it may still have data parts that were removed by DROP PARTITION. [#6522](https://github.com/yandex/ClickHouse/issues/6522) [#6523](https://github.com/yandex/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
-* Fixed issue with parsing CSV [#6426](https://github.com/yandex/ClickHouse/issues/6426) [#6559](https://github.com/yandex/ClickHouse/pull/6559) ([tavplubix](https://github.com/tavplubix))
-* Fixed data race in system.parts table and ALTER query. This fixes [#6245](https://github.com/yandex/ClickHouse/issues/6245). [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happed due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/yandex/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/ClickHouse/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
+* Fix NPE when using IN clause with a subquery with a tuple. [#6125](https://github.com/ClickHouse/ClickHouse/issues/6125) [#6550](https://github.com/ClickHouse/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
+* Fixed an issue where, if a stale replica becomes alive, it may still have data parts that were removed by DROP PARTITION. [#6522](https://github.com/ClickHouse/ClickHouse/issues/6522) [#6523](https://github.com/ClickHouse/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
+* Fixed an issue with parsing CSV. [#6426](https://github.com/ClickHouse/ClickHouse/issues/6426) [#6559](https://github.com/ClickHouse/ClickHouse/pull/6559) ([tavplubix](https://github.com/tavplubix))
+* Fixed data race in system.parts table and ALTER query. This fixes [#6245](https://github.com/ClickHouse/ClickHouse/issues/6245). [#6513](https://github.com/ClickHouse/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed wrong code in mutations that may lead to memory corruption. Fixed segfault with read of address `0x14c0` that may happen due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed race condition in preparation of mutation queries. Fixed deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/ClickHouse/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.11.7.40, 2019-08-14
### Bug Fix
* Kafka integration has been fixed in this version.
-* Fix segfault when using `arrayReduce` for constant arguments. [#6326](https://github.com/yandex/ClickHouse/pull/6326) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed `toFloat()` monotonicity. [#6374](https://github.com/yandex/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
-* Fix segfault with enabled `optimize_skip_unused_shards` and missing sharding key. [#6384](https://github.com/yandex/ClickHouse/pull/6384) ([CurtizJ](https://github.com/CurtizJ))
-* Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/yandex/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Removed extra verbose logging from MySQL handler. [#6389](https://github.com/yandex/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix wrong behavior and possible segfaults in `topK` and `topKWeighted` aggregated functions. [#6404](https://github.com/yandex/ClickHouse/pull/6404) ([CurtizJ](https://github.com/CurtizJ))
-* Do not expose virtual columns in `system.columns` table. This is required for backward compatibility. [#6406](https://github.com/yandex/ClickHouse/pull/6406) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug with memory allocation for string fields in complex key cache dictionary. [#6447](https://github.com/yandex/ClickHouse/pull/6447) ([alesapin](https://github.com/alesapin))
-* Fix bug with enabling adaptive granularity when creating new replica for `Replicated*MergeTree` table. [#6452](https://github.com/yandex/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
-* Fix infinite loop when reading Kafka messages. [#6354](https://github.com/yandex/ClickHouse/pull/6354) ([abyss7](https://github.com/abyss7))
-* Fixed the possibility of a fabricated query to cause server crash due to stack overflow in SQL parser and possibility of stack overflow in `Merge` and `Distributed` tables [#6433](https://github.com/yandex/ClickHouse/pull/6433) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed Gorilla encoding error on small sequences. [#6444](https://github.com/yandex/ClickHouse/pull/6444) ([Enmk](https://github.com/Enmk))
+* Fix segfault when using `arrayReduce` with constant arguments (see the example after this list). [#6326](https://github.com/ClickHouse/ClickHouse/pull/6326) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed `toFloat()` monotonicity. [#6374](https://github.com/ClickHouse/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
+* Fix segfault with enabled `optimize_skip_unused_shards` and missing sharding key. [#6384](https://github.com/ClickHouse/ClickHouse/pull/6384) ([CurtizJ](https://github.com/CurtizJ))
+* Fixed logic of `arrayEnumerateUniqRanked` function. [#6423](https://github.com/ClickHouse/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Removed extra verbose logging from MySQL handler. [#6389](https://github.com/ClickHouse/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix wrong behavior and possible segfaults in `topK` and `topKWeighted` aggregate functions. [#6404](https://github.com/ClickHouse/ClickHouse/pull/6404) ([CurtizJ](https://github.com/CurtizJ))
+* Do not expose virtual columns in `system.columns` table. This is required for backward compatibility. [#6406](https://github.com/ClickHouse/ClickHouse/pull/6406) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix bug with memory allocation for string fields in complex key cache dictionary. [#6447](https://github.com/ClickHouse/ClickHouse/pull/6447) ([alesapin](https://github.com/alesapin))
+* Fix bug with enabling adaptive granularity when creating new replica for `Replicated*MergeTree` table. [#6452](https://github.com/ClickHouse/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
+* Fix infinite loop when reading Kafka messages. [#6354](https://github.com/ClickHouse/ClickHouse/pull/6354) ([abyss7](https://github.com/abyss7))
+* Fixed the possibility of a fabricated query causing a server crash due to stack overflow in the SQL parser, and the possibility of stack overflow in `Merge` and `Distributed` tables. [#6433](https://github.com/ClickHouse/ClickHouse/pull/6433) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed Gorilla encoding error on small sequences. [#6444](https://github.com/ClickHouse/ClickHouse/pull/6444) ([Enmk](https://github.com/Enmk))
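+
+An example of `arrayReduce` with constant arguments, which previously segfaulted:
+
+```sql
+SELECT arrayReduce('max', [1, 2, 3]);         -- 3
+SELECT arrayReduce('uniq', ['a', 'a', 'b']);  -- 2
+```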
### Improvement
-* Allow user to override `poll_interval` and `idle_connection_timeout` settings on connection. [#6230](https://github.com/yandex/ClickHouse/pull/6230) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow the user to override `poll_interval` and `idle_connection_timeout` settings on connection. [#6230](https://github.com/ClickHouse/ClickHouse/pull/6230) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.11.5.28, 2019-08-05
### Bug Fix
-* Fixed the possibility of hanging queries when server is overloaded. [#6301](https://github.com/yandex/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix FPE in yandexConsistentHash function. This fixes [#6304](https://github.com/yandex/ClickHouse/issues/6304). [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/yandex/ClickHouse/issues/6257). [#6281](https://github.com/yandex/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix parsing of `bool` settings from `true` and `false` strings in configuration files. [#6278](https://github.com/yandex/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
-* Fix rare bug with incompatible stream headers in queries to `Distributed` table over `MergeTree` table when part of `WHERE` moves to `PREWHERE`. [#6236](https://github.com/yandex/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
-* Fixed overflow in integer division of signed type to unsigned type. This fixes [#6214](https://github.com/yandex/ClickHouse/issues/6214). [#6233](https://github.com/yandex/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed the possibility of hanging queries when the server is overloaded. [#6301](https://github.com/ClickHouse/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix FPE in yandexConsistentHash function. This fixes [#6304](https://github.com/ClickHouse/ClickHouse/issues/6304). [#6126](https://github.com/ClickHouse/ClickHouse/pull/6126) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/ClickHouse/ClickHouse/issues/6257). [#6281](https://github.com/ClickHouse/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix parsing of `bool` settings from `true` and `false` strings in configuration files. [#6278](https://github.com/ClickHouse/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
+* Fix rare bug with incompatible stream headers in queries to `Distributed` table over `MergeTree` table when part of `WHERE` moves to `PREWHERE`. [#6236](https://github.com/ClickHouse/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
+* Fixed overflow in integer division of a signed type by an unsigned type. This fixes [#6214](https://github.com/ClickHouse/ClickHouse/issues/6214). [#6233](https://github.com/ClickHouse/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Backward Incompatible Change
* `Kafka` is still broken.
@@ -404,24 +404,24 @@
## ClickHouse release 19.11.4.24, 2019-08-01
### Bug Fix
-* Fix bug with writing secondary indices marks with adaptive granularity. [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
-* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation. [#6225](https://github.com/yandex/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
-* Fixed hang in `JSONExtractRaw` function. Fixed [#6195](https://github.com/yandex/ClickHouse/issues/6195) [#6198](https://github.com/yandex/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix segfault in ExternalLoader::reloadOutdated(). [#6082](https://github.com/yandex/ClickHouse/pull/6082) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fixed the case when server may close listening sockets but not shutdown and continue serving remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/yandex/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed useless and incorrect condition on update field for initial loading of external dictionaries via ODBC, MySQL, ClickHouse and HTTP. This fixes [#6069](https://github.com/yandex/ClickHouse/issues/6069) [#6083](https://github.com/yandex/ClickHouse/pull/6083) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed irrelevant exception in cast of `LowCardinality(Nullable)` to not-Nullable column in case if it doesn't contain Nulls (e.g. in query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`. [#6094](https://github.com/yandex/ClickHouse/issues/6094) [#6119](https://github.com/yandex/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix non-deterministic result of "uniq" aggregate function in extreme rare cases. The bug was present in all ClickHouse versions. [#6058](https://github.com/yandex/ClickHouse/pull/6058) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Segfault when we set a little bit too high CIDR on the function `IPv6CIDRToRange`. [#6068](https://github.com/yandex/ClickHouse/pull/6068) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Fixed small memory leak when server throw many exceptions from many different contexts. [#6144](https://github.com/yandex/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix the situation when consumer got paused before subscription and not resumed afterwards. [#6075](https://github.com/yandex/ClickHouse/pull/6075) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
-* Clearing the Kafka data buffer from the previous read operation that was completed with an error [#6026](https://github.com/yandex/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa)) Note that Kafka is broken in this version.
-* Since `StorageMergeTree::background_task_handle` is initialized in `startup()` the `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check if it is initialized. [#6080](https://github.com/yandex/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
+* Fix bug with writing secondary index marks with adaptive granularity. [#6126](https://github.com/ClickHouse/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
+* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation (see the example after this list). [#6225](https://github.com/ClickHouse/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
+* Fixed hang in `JSONExtractRaw` function. Fixed [#6195](https://github.com/ClickHouse/ClickHouse/issues/6195) [#6198](https://github.com/ClickHouse/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix segfault in `ExternalLoader::reloadOutdated()`. [#6082](https://github.com/ClickHouse/ClickHouse/pull/6082) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed the case when the server may close listening sockets but not shut down, continuing to serve remaining queries. You may end up with two running clickhouse-server processes. Sometimes, the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/ClickHouse/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed useless and incorrect condition on update field for initial loading of external dictionaries via ODBC, MySQL, ClickHouse and HTTP. This fixes [#6069](https://github.com/ClickHouse/ClickHouse/issues/6069) [#6083](https://github.com/ClickHouse/ClickHouse/pull/6083) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed an irrelevant exception in the cast of `LowCardinality(Nullable)` to a non-Nullable column if it doesn't contain Nulls (e.g. in a query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`). [#6094](https://github.com/ClickHouse/ClickHouse/issues/6094) [#6119](https://github.com/ClickHouse/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix non-deterministic result of the "uniq" aggregate function in extremely rare cases. The bug was present in all ClickHouse versions. [#6058](https://github.com/ClickHouse/ClickHouse/pull/6058) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a segfault when setting a slightly too-high CIDR in the function `IPv6CIDRToRange`. [#6068](https://github.com/ClickHouse/ClickHouse/pull/6068) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Fixed a small memory leak when the server throws many exceptions from many different contexts. [#6144](https://github.com/ClickHouse/ClickHouse/pull/6144) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix the situation when the consumer got paused before subscription and was not resumed afterwards. [#6075](https://github.com/ClickHouse/ClickHouse/pull/6075) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
+* Clear the Kafka data buffer from the previous read operation that was completed with an error. [#6026](https://github.com/ClickHouse/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa)) Note that Kafka is broken in this version.
+* Since `StorageMergeTree::background_task_handle` is initialized in `startup()`, `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check whether it is initialized. [#6080](https://github.com/ClickHouse/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
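+
+A sketch of the fixed `GROUP BY` modifiers, with a hypothetical table:
+
+```sql
+SELECT region, device, count() AS hits
+FROM visits
+GROUP BY region, device
+WITH ROLLUP;  -- adds subtotal rows per region and a grand total
+```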
### Build/Testing/Packaging Improvement
-* Added official `rpm` packages. [#5740](https://github.com/yandex/ClickHouse/pull/5740) ([proller](https://github.com/proller)) ([alesapin](https://github.com/alesapin))
-* Add an ability to build `.rpm` and `.tgz` packages with `packager` script. [#5769](https://github.com/yandex/ClickHouse/pull/5769) ([alesapin](https://github.com/alesapin))
-* Fixes for "Arcadia" build system. [#6223](https://github.com/yandex/ClickHouse/pull/6223) ([proller](https://github.com/proller))
+* Added official `rpm` packages. [#5740](https://github.com/ClickHouse/ClickHouse/pull/5740) ([proller](https://github.com/proller)) ([alesapin](https://github.com/alesapin))
+* Add the ability to build `.rpm` and `.tgz` packages with the `packager` script. [#5769](https://github.com/ClickHouse/ClickHouse/pull/5769) ([alesapin](https://github.com/alesapin))
+* Fixes for "Arcadia" build system. [#6223](https://github.com/ClickHouse/ClickHouse/pull/6223) ([proller](https://github.com/proller))
### Backward Incompatible Change
* `Kafka` is broken in this version.
@@ -430,841 +430,841 @@
## ClickHouse release 19.11.3.11, 2019-07-18
### New Feature
-* Added support for prepared statements. [#5331](https://github.com/yandex/ClickHouse/pull/5331/) ([Alexander](https://github.com/sanych73)) [#5630](https://github.com/yandex/ClickHouse/pull/5630) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `DoubleDelta` and `Gorilla` column codecs [#5600](https://github.com/yandex/ClickHouse/pull/5600) ([Vasily Nemkov](https://github.com/Enmk))
-* Added `os_thread_priority` setting that allows to control the "nice" value of query processing threads that is used by OS to adjust dynamic scheduling priority. It requires `CAP_SYS_NICE` capabilities to work. This implements [#5858](https://github.com/yandex/ClickHouse/issues/5858) [#5909](https://github.com/yandex/ClickHouse/pull/5909) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Implement `_topic`, `_offset`, `_key` columns for Kafka engine [#5382](https://github.com/yandex/ClickHouse/pull/5382) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
-* Add aggregate function combinator `-Resample` [#5590](https://github.com/yandex/ClickHouse/pull/5590) ([hcz](https://github.com/hczhcz))
-* Aggregate functions `groupArrayMovingSum(win_size)(x)` and `groupArrayMovingAvg(win_size)(x)`, which calculate moving sum/avg with or without window-size limitation. [#5595](https://github.com/yandex/ClickHouse/pull/5595) ([inv2004](https://github.com/inv2004))
-* Add synonim `arrayFlatten` <-> `flatten` [#5764](https://github.com/yandex/ClickHouse/pull/5764) ([hcz](https://github.com/hczhcz))
-* Intergate H3 function `geoToH3` from Uber. [#4724](https://github.com/yandex/ClickHouse/pull/4724) ([Remen Ivan](https://github.com/BHYCHIK)) [#5805](https://github.com/yandex/ClickHouse/pull/5805) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added support for prepared statements. [#5331](https://github.com/ClickHouse/ClickHouse/pull/5331/) ([Alexander](https://github.com/sanych73)) [#5630](https://github.com/ClickHouse/ClickHouse/pull/5630) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `DoubleDelta` and `Gorilla` column codecs [#5600](https://github.com/ClickHouse/ClickHouse/pull/5600) ([Vasily Nemkov](https://github.com/Enmk))
+* Added the `os_thread_priority` setting that allows controlling the "nice" value of query processing threads, which is used by the OS to adjust dynamic scheduling priority. It requires `CAP_SYS_NICE` capabilities to work. This implements [#5858](https://github.com/ClickHouse/ClickHouse/issues/5858) [#5909](https://github.com/ClickHouse/ClickHouse/pull/5909) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Implement `_topic`, `_offset`, `_key` columns for Kafka engine [#5382](https://github.com/ClickHouse/ClickHouse/pull/5382) ([Ivan](https://github.com/abyss7)) Note that Kafka is broken in this version.
+* Add aggregate function combinator `-Resample` [#5590](https://github.com/ClickHouse/ClickHouse/pull/5590) ([hcz](https://github.com/hczhcz))
+* Aggregate functions `groupArrayMovingSum(win_size)(x)` and `groupArrayMovingAvg(win_size)(x)`, which calculate a moving sum/avg with or without window-size limitation (see the example after this list). [#5595](https://github.com/ClickHouse/ClickHouse/pull/5595) ([inv2004](https://github.com/inv2004))
+* Add synonym `arrayFlatten` <-> `flatten`. [#5764](https://github.com/ClickHouse/ClickHouse/pull/5764) ([hcz](https://github.com/hczhcz))
+* Integrate the H3 function `geoToH3` from Uber. [#4724](https://github.com/ClickHouse/ClickHouse/pull/4724) ([Remen Ivan](https://github.com/BHYCHIK)) [#5805](https://github.com/ClickHouse/ClickHouse/pull/5805) ([alexey-milovidov](https://github.com/alexey-milovidov))
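+
+Example usage of the new moving-window aggregates, with a hypothetical table `series`:
+
+```sql
+SELECT
+    groupArrayMovingSum(3)(value) AS moving_sum,
+    groupArrayMovingAvg(3)(value) AS moving_avg
+FROM series;
+```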
### Bug Fix
-* Implement DNS cache with asynchronous update. Separate thread resolves all hosts and updates DNS cache with period (setting `dns_cache_update_period`). It should help, when ip of hosts changes frequently. [#5857](https://github.com/yandex/ClickHouse/pull/5857) ([Anton Popov](https://github.com/CurtizJ))
-* Fix segfault in `Delta` codec which affects columns with values less than 32 bits size. The bug led to random memory corruption. [#5786](https://github.com/yandex/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
-* Fix segfault in TTL merge with non-physical columns in block. [#5819](https://github.com/yandex/ClickHouse/pull/5819) ([Anton Popov](https://github.com/CurtizJ))
-* Fix rare bug in checking of part with `LowCardinality` column. Previously `checkDataPart` always fails for part with `LowCardinality` column. [#5832](https://github.com/yandex/ClickHouse/pull/5832) ([alesapin](https://github.com/alesapin))
-* Avoid hanging connections when server thread pool is full. It is important for connections from `remote` table function or connections to a shard without replicas when there is long connection timeout. This fixes [#5878](https://github.com/yandex/ClickHouse/issues/5878) [#5881](https://github.com/yandex/ClickHouse/pull/5881) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Support for constant arguments to `evalMLModel` function. This fixes [#5817](https://github.com/yandex/ClickHouse/issues/5817) [#5820](https://github.com/yandex/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed the issue when ClickHouse determines default time zone as `UCT` instead of `UTC`. This fixes [#5804](https://github.com/yandex/ClickHouse/issues/5804). [#5828](https://github.com/yandex/ClickHouse/pull/5828) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed buffer underflow in `visitParamExtractRaw`. This fixes [#5901](https://github.com/yandex/ClickHouse/issues/5901) [#5902](https://github.com/yandex/ClickHouse/pull/5902) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Now distributed `DROP/ALTER/TRUNCATE/OPTIMIZE ON CLUSTER` queries will be executed directly on leader replica. [#5757](https://github.com/yandex/ClickHouse/pull/5757) ([alesapin](https://github.com/alesapin))
-* Fix `coalesce` for `ColumnConst` with `ColumnNullable` + related changes. [#5755](https://github.com/yandex/ClickHouse/pull/5755) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix the `ReadBufferFromKafkaConsumer` so that it keeps reading new messages after `commit()` even if it was stalled before [#5852](https://github.com/yandex/ClickHouse/pull/5852) ([Ivan](https://github.com/abyss7))
-* Fix `FULL` and `RIGHT` JOIN results when joining on `Nullable` keys in right table. [#5859](https://github.com/yandex/ClickHouse/pull/5859) ([Artem Zuikov](https://github.com/4ertus2))
-* Possible fix of infinite sleeping of low-priority queries. [#5842](https://github.com/yandex/ClickHouse/pull/5842) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix race condition, which cause that some queries may not appear in query_log after `SYSTEM FLUSH LOGS` query. [#5456](https://github.com/yandex/ClickHouse/issues/5456) [#5685](https://github.com/yandex/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
-* Fixed `heap-use-after-free` ASan warning in ClusterCopier caused by watch which try to use already removed copier object. [#5871](https://github.com/yandex/ClickHouse/pull/5871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fixed wrong `StringRef` pointer returned by some implementations of `IColumn::deserializeAndInsertFromArena`. This bug affected only unit-tests. [#5973](https://github.com/yandex/ClickHouse/pull/5973) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Prevent source and intermediate array join columns of masking same name columns. [#5941](https://github.com/yandex/ClickHouse/pull/5941) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix insert and select query to MySQL engine with MySQL style identifier quoting. [#5704](https://github.com/yandex/ClickHouse/pull/5704) ([Winter Zhang](https://github.com/zhang2014))
-* Now `CHECK TABLE` query can work with MergeTree engine family. It returns check status and message if any for each part (or file in case of simplier engines). Also, fix bug in fetch of a broken part. [#5865](https://github.com/yandex/ClickHouse/pull/5865) ([alesapin](https://github.com/alesapin))
-* Fix SPLIT_SHARED_LIBRARIES runtime [#5793](https://github.com/yandex/ClickHouse/pull/5793) ([Danila Kutenin](https://github.com/danlark1))
-* Fixed time zone initialization when `/etc/localtime` is a relative symlink like `../usr/share/zoneinfo/Europe/Moscow` [#5922](https://github.com/yandex/ClickHouse/pull/5922) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* clickhouse-copier: Fix use-after free on shutdown [#5752](https://github.com/yandex/ClickHouse/pull/5752) ([proller](https://github.com/proller))
-* Updated `simdjson`. Fixed the issue that some invalid JSONs with zero bytes successfully parse. [#5938](https://github.com/yandex/ClickHouse/pull/5938) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix shutdown of SystemLogs [#5802](https://github.com/yandex/ClickHouse/pull/5802) ([Anton Popov](https://github.com/CurtizJ))
-* Fix hanging when condition in invalidate_query depends on a dictionary. [#6011](https://github.com/yandex/ClickHouse/pull/6011) ([Vitaly Baranov](https://github.com/vitlibar))
+* Implement DNS cache with asynchronous update. A separate thread resolves all hosts and updates the DNS cache periodically (setting `dns_cache_update_period`). It should help when the IP of hosts changes frequently. [#5857](https://github.com/ClickHouse/ClickHouse/pull/5857) ([Anton Popov](https://github.com/CurtizJ))
+* Fix segfault in `Delta` codec which affects columns with values smaller than 32 bits in size. The bug led to random memory corruption. [#5786](https://github.com/ClickHouse/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
+* Fix segfault in TTL merge with non-physical columns in block. [#5819](https://github.com/ClickHouse/ClickHouse/pull/5819) ([Anton Popov](https://github.com/CurtizJ))
+* Fix rare bug in checking of a part with a `LowCardinality` column. Previously `checkDataPart` always failed for parts with a `LowCardinality` column. [#5832](https://github.com/ClickHouse/ClickHouse/pull/5832) ([alesapin](https://github.com/alesapin))
+* Avoid hanging connections when the server thread pool is full. It is important for connections from the `remote` table function or connections to a shard without replicas when there is a long connection timeout. This fixes [#5878](https://github.com/ClickHouse/ClickHouse/issues/5878) [#5881](https://github.com/ClickHouse/ClickHouse/pull/5881) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Support for constant arguments to `evalMLModel` function. This fixes [#5817](https://github.com/ClickHouse/ClickHouse/issues/5817) [#5820](https://github.com/ClickHouse/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed the issue when ClickHouse determines default time zone as `UCT` instead of `UTC`. This fixes [#5804](https://github.com/ClickHouse/ClickHouse/issues/5804). [#5828](https://github.com/ClickHouse/ClickHouse/pull/5828) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed buffer underflow in `visitParamExtractRaw`. This fixes [#5901](https://github.com/ClickHouse/ClickHouse/issues/5901) [#5902](https://github.com/ClickHouse/ClickHouse/pull/5902) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Now distributed `DROP/ALTER/TRUNCATE/OPTIMIZE ON CLUSTER` queries will be executed directly on the leader replica. [#5757](https://github.com/ClickHouse/ClickHouse/pull/5757) ([alesapin](https://github.com/alesapin))
+* Fix `coalesce` for `ColumnConst` with `ColumnNullable` + related changes. [#5755](https://github.com/ClickHouse/ClickHouse/pull/5755) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix the `ReadBufferFromKafkaConsumer` so that it keeps reading new messages after `commit()` even if it was stalled before [#5852](https://github.com/ClickHouse/ClickHouse/pull/5852) ([Ivan](https://github.com/abyss7))
+* Fix `FULL` and `RIGHT` JOIN results when joining on `Nullable` keys in right table. [#5859](https://github.com/ClickHouse/ClickHouse/pull/5859) ([Artem Zuikov](https://github.com/4ertus2))
+* Possible fix of infinite sleeping of low-priority queries. [#5842](https://github.com/ClickHouse/ClickHouse/pull/5842) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix race condition which caused some queries to not appear in query_log after a `SYSTEM FLUSH LOGS` query. [#5456](https://github.com/ClickHouse/ClickHouse/issues/5456) [#5685](https://github.com/ClickHouse/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
+* Fixed `heap-use-after-free` ASan warning in ClusterCopier caused by a watch that tried to use an already removed copier object. [#5871](https://github.com/ClickHouse/ClickHouse/pull/5871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fixed wrong `StringRef` pointer returned by some implementations of `IColumn::deserializeAndInsertFromArena`. This bug affected only unit tests. [#5973](https://github.com/ClickHouse/ClickHouse/pull/5973) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Prevent source and intermediate array join columns from masking columns with the same name. [#5941](https://github.com/ClickHouse/ClickHouse/pull/5941) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix INSERT and SELECT queries to the MySQL engine with MySQL-style identifier quoting. [#5704](https://github.com/ClickHouse/ClickHouse/pull/5704) ([Winter Zhang](https://github.com/zhang2014))
+* Now the `CHECK TABLE` query can work with the MergeTree engine family. It returns the check status and a message, if any, for each part (or file, in the case of simpler engines); see the example after this list. Also, fix a bug in the fetch of a broken part. [#5865](https://github.com/ClickHouse/ClickHouse/pull/5865) ([alesapin](https://github.com/alesapin))
+* Fix SPLIT_SHARED_LIBRARIES runtime [#5793](https://github.com/ClickHouse/ClickHouse/pull/5793) ([Danila Kutenin](https://github.com/danlark1))
+* Fixed time zone initialization when `/etc/localtime` is a relative symlink like `../usr/share/zoneinfo/Europe/Moscow` [#5922](https://github.com/ClickHouse/ClickHouse/pull/5922) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* clickhouse-copier: Fix use-after-free on shutdown [#5752](https://github.com/ClickHouse/ClickHouse/pull/5752) ([proller](https://github.com/proller))
+* Updated `simdjson`. Fixed the issue that some invalid JSONs with zero bytes parsed successfully. [#5938](https://github.com/ClickHouse/ClickHouse/pull/5938) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix shutdown of SystemLogs [#5802](https://github.com/ClickHouse/ClickHouse/pull/5802) ([Anton Popov](https://github.com/CurtizJ))
+* Fix hanging when the condition in `invalidate_query` depends on a dictionary. [#6011](https://github.com/ClickHouse/ClickHouse/pull/6011) ([Vitaly Baranov](https://github.com/vitlibar))
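+
+A minimal sketch of the extended `CHECK TABLE` support mentioned above; the table is hypothetical and the exact result columns may differ between versions:
+```sql
+-- Hypothetical MergeTree table.
+CREATE TABLE hits_local (EventDate Date, UserID UInt64)
+ENGINE = MergeTree() ORDER BY (EventDate, UserID);
+
+-- Returns the check status (and a message, if any) per part.
+CHECK TABLE hits_local;
+```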
### Improvement
-* Allow unresolvable addresses in cluster configuration. They will be considered unavailable and tried to resolve at every connection attempt. This is especially useful for Kubernetes. This fixes [#5714](https://github.com/yandex/ClickHouse/issues/5714) [#5924](https://github.com/yandex/ClickHouse/pull/5924) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Close idle TCP connections (with one hour timeout by default). This is especially important for large clusters with multiple distributed tables on every server, because every server can possibly keep a connection pool to every other server, and after peak query concurrency, connections will stall. This fixes [#5879](https://github.com/yandex/ClickHouse/issues/5879) [#5880](https://github.com/yandex/ClickHouse/pull/5880) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Better quality of `topK` function. Changed the SavingSpace set behavior to remove the last element if the new element have a bigger weight. [#5833](https://github.com/yandex/ClickHouse/issues/5833) [#5850](https://github.com/yandex/ClickHouse/pull/5850) ([Guillaume Tassery](https://github.com/YiuRULE))
-* URL functions to work with domains now can work for incomplete URLs without scheme [#5725](https://github.com/yandex/ClickHouse/pull/5725) ([alesapin](https://github.com/alesapin))
-* Checksums added to the `system.parts_columns` table. [#5874](https://github.com/yandex/ClickHouse/pull/5874) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
-* Added `Enum` data type as a synonim for `Enum8` or `Enum16`. [#5886](https://github.com/yandex/ClickHouse/pull/5886) ([dimarub2000](https://github.com/dimarub2000))
-* Full bit transpose variant for `T64` codec. Could lead to better compression with `zstd`. [#5742](https://github.com/yandex/ClickHouse/pull/5742) ([Artem Zuikov](https://github.com/4ertus2))
-* Condition on `startsWith` function now can uses primary key. This fixes [#5310](https://github.com/yandex/ClickHouse/issues/5310) and [#5882](https://github.com/yandex/ClickHouse/issues/5882) [#5919](https://github.com/yandex/ClickHouse/pull/5919) ([dimarub2000](https://github.com/dimarub2000))
-* Allow to use `clickhouse-copier` with cross-replication cluster topology by permitting empty database name. [#5745](https://github.com/yandex/ClickHouse/pull/5745) ([nvartolomei](https://github.com/nvartolomei))
-* Use `UTC` as default timezone on a system without `tzdata` (e.g. bare Docker container). Before this patch, error message `Could not determine local time zone` was printed and server or client refused to start. [#5827](https://github.com/yandex/ClickHouse/pull/5827) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Returned back support for floating point argument in function `quantileTiming` for backward compatibility. [#5911](https://github.com/yandex/ClickHouse/pull/5911) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Show which table is missing column in error messages. [#5768](https://github.com/yandex/ClickHouse/pull/5768) ([Ivan](https://github.com/abyss7))
-* Disallow run query with same query_id by various users [#5430](https://github.com/yandex/ClickHouse/pull/5430) ([proller](https://github.com/proller))
-* More robust code for sending metrics to Graphite. It will work even during long multiple `RENAME TABLE` operation. [#5875](https://github.com/yandex/ClickHouse/pull/5875) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* More informative error messages will be displayed when ThreadPool cannot schedule a task for execution. This fixes [#5305](https://github.com/yandex/ClickHouse/issues/5305) [#5801](https://github.com/yandex/ClickHouse/pull/5801) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Inverting ngramSearch to be more intuitive [#5807](https://github.com/yandex/ClickHouse/pull/5807) ([Danila Kutenin](https://github.com/danlark1))
-* Add user parsing in HDFS engine builder [#5946](https://github.com/yandex/ClickHouse/pull/5946) ([akonyaev90](https://github.com/akonyaev90))
-* Update default value of `max_ast_elements parameter` [#5933](https://github.com/yandex/ClickHouse/pull/5933) ([Artem Konovalov](https://github.com/izebit))
-* Added a notion of obsolete settings. The obsolete setting `allow_experimental_low_cardinality_type` can be used with no effect. [0f15c01c6802f7ce1a1494c12c846be8c98944cd](https://github.com/yandex/ClickHouse/commit/0f15c01c6802f7ce1a1494c12c846be8c98944cd) [Alexey Milovidov](https://github.com/alexey-milovidov)
+* Allow unresolvable addresses in cluster configuration. They will be considered unavailable, and resolution will be retried at every connection attempt. This is especially useful for Kubernetes. This fixes [#5714](https://github.com/ClickHouse/ClickHouse/issues/5714) [#5924](https://github.com/ClickHouse/ClickHouse/pull/5924) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Close idle TCP connections (with one hour timeout by default). This is especially important for large clusters with multiple distributed tables on every server, because every server can possibly keep a connection pool to every other server, and after peak query concurrency, connections will stall. This fixes [#5879](https://github.com/ClickHouse/ClickHouse/issues/5879) [#5880](https://github.com/ClickHouse/ClickHouse/pull/5880) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Better quality of the `topK` function. Changed the SpaceSaving set behavior to remove the last element if the new element has a bigger weight (see the example after this list). [#5833](https://github.com/ClickHouse/ClickHouse/issues/5833) [#5850](https://github.com/ClickHouse/ClickHouse/pull/5850) ([Guillaume Tassery](https://github.com/YiuRULE))
+* URL functions that work with domains can now handle incomplete URLs without a scheme [#5725](https://github.com/ClickHouse/ClickHouse/pull/5725) ([alesapin](https://github.com/alesapin))
+* Checksums added to the `system.parts_columns` table. [#5874](https://github.com/ClickHouse/ClickHouse/pull/5874) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
+* Added `Enum` data type as a synonym for `Enum8` or `Enum16` (also in the example after this list). [#5886](https://github.com/ClickHouse/ClickHouse/pull/5886) ([dimarub2000](https://github.com/dimarub2000))
+* Full bit transpose variant for `T64` codec. Could lead to better compression with `zstd`. [#5742](https://github.com/ClickHouse/ClickHouse/pull/5742) ([Artem Zuikov](https://github.com/4ertus2))
+* A condition on the `startsWith` function can now use the primary key (also in the example after this list). This fixes [#5310](https://github.com/ClickHouse/ClickHouse/issues/5310) and [#5882](https://github.com/ClickHouse/ClickHouse/issues/5882) [#5919](https://github.com/ClickHouse/ClickHouse/pull/5919) ([dimarub2000](https://github.com/dimarub2000))
+* Allow to use `clickhouse-copier` with cross-replication cluster topology by permitting empty database name. [#5745](https://github.com/ClickHouse/ClickHouse/pull/5745) ([nvartolomei](https://github.com/nvartolomei))
+* Use `UTC` as default timezone on a system without `tzdata` (e.g. bare Docker container). Before this patch, error message `Could not determine local time zone` was printed and server or client refused to start. [#5827](https://github.com/ClickHouse/ClickHouse/pull/5827) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Returned back support for floating point argument in function `quantileTiming` for backward compatibility. [#5911](https://github.com/ClickHouse/ClickHouse/pull/5911) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Show which table is missing column in error messages. [#5768](https://github.com/ClickHouse/ClickHouse/pull/5768) ([Ivan](https://github.com/abyss7))
+* Disallow running queries with the same query_id by different users [#5430](https://github.com/ClickHouse/ClickHouse/pull/5430) ([proller](https://github.com/proller))
+* More robust code for sending metrics to Graphite. It will work even during long multiple `RENAME TABLE` operation. [#5875](https://github.com/ClickHouse/ClickHouse/pull/5875) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* More informative error messages will be displayed when ThreadPool cannot schedule a task for execution. This fixes [#5305](https://github.com/ClickHouse/ClickHouse/issues/5305) [#5801](https://github.com/ClickHouse/ClickHouse/pull/5801) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Inverting ngramSearch to be more intuitive [#5807](https://github.com/ClickHouse/ClickHouse/pull/5807) ([Danila Kutenin](https://github.com/danlark1))
+* Add user parsing in HDFS engine builder [#5946](https://github.com/ClickHouse/ClickHouse/pull/5946) ([akonyaev90](https://github.com/akonyaev90))
+* Update the default value of the `max_ast_elements` parameter [#5933](https://github.com/ClickHouse/ClickHouse/pull/5933) ([Artem Konovalov](https://github.com/izebit))
+* Added a notion of obsolete settings. The obsolete setting `allow_experimental_low_cardinality_type` can be used with no effect. [0f15c01c6802f7ce1a1494c12c846be8c98944cd](https://github.com/ClickHouse/ClickHouse/commit/0f15c01c6802f7ce1a1494c12c846be8c98944cd) [Alexey Milovidov](https://github.com/alexey-milovidov)
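+
+A combined sketch of three of the improvements above (the `Enum` alias, `startsWith` over the primary key, and `topK`); the table and data are illustrative:
+```sql
+-- `Enum` can now be written as a synonym for `Enum8`/`Enum16`.
+CREATE TABLE events (dt Date, kind Enum('click' = 1, 'view' = 2), url String)
+ENGINE = MergeTree() ORDER BY url;
+
+-- A startsWith condition on the primary key column can use the index.
+SELECT count() FROM events WHERE startsWith(url, 'https://');
+
+-- topK with the improved SpaceSaving behaviour.
+SELECT topK(5)(url) FROM events;
+```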
### Performance Improvement
-* Increase number of streams to SELECT from Merge table for more uniform distribution of threads. Added setting `max_streams_multiplier_for_merge_tables`. This fixes [#5797](https://github.com/yandex/ClickHouse/issues/5797) [#5915](https://github.com/yandex/ClickHouse/pull/5915) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Increase the number of streams to SELECT from a Merge table for a more uniform distribution of threads. Added the setting `max_streams_multiplier_for_merge_tables` (see the example after this list). This fixes [#5797](https://github.com/ClickHouse/ClickHouse/issues/5797) [#5915](https://github.com/ClickHouse/ClickHouse/pull/5915) ([alexey-milovidov](https://github.com/alexey-milovidov))
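+
+A small sketch of the new setting; `merge_table` is a hypothetical `Merge` table:
+```sql
+SET max_streams_multiplier_for_merge_tables = 4;
+SELECT count() FROM merge_table;
+```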
### Build/Testing/Packaging Improvement
-* Add a backward compatibility test for client-server interaction with different versions of clickhouse. [#5868](https://github.com/yandex/ClickHouse/pull/5868) ([alesapin](https://github.com/alesapin))
-* Test coverage information in every commit and pull request. [#5896](https://github.com/yandex/ClickHouse/pull/5896) ([alesapin](https://github.com/alesapin))
-* Cooperate with address sanitizer to support our custom allocators (`Arena` and `ArenaWithFreeLists`) for better debugging of "use-after-free" errors. [#5728](https://github.com/yandex/ClickHouse/pull/5728) ([akuzm](https://github.com/akuzm))
-* Switch to [LLVM libunwind implementation](https://github.com/llvm-mirror/libunwind) for C++ exception handling and for stack traces printing [#4828](https://github.com/yandex/ClickHouse/pull/4828) ([Nikita Lapkov](https://github.com/laplab))
-* Add two more warnings from -Weverything [#5923](https://github.com/yandex/ClickHouse/pull/5923) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Allow to build ClickHouse with Memory Sanitizer. [#3949](https://github.com/yandex/ClickHouse/pull/3949) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed ubsan report about `bitTest` function in fuzz test. [#5943](https://github.com/yandex/ClickHouse/pull/5943) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Docker: added possibility to init a ClickHouse instance which requires authentication. [#5727](https://github.com/yandex/ClickHouse/pull/5727) ([Korviakov Andrey](https://github.com/shurshun))
-* Update librdkafka to version 1.1.0 [#5872](https://github.com/yandex/ClickHouse/pull/5872) ([Ivan](https://github.com/abyss7))
-* Add global timeout for integration tests and disable some of them in tests code. [#5741](https://github.com/yandex/ClickHouse/pull/5741) ([alesapin](https://github.com/alesapin))
-* Fix some ThreadSanitizer failures. [#5854](https://github.com/yandex/ClickHouse/pull/5854) ([akuzm](https://github.com/akuzm))
-* The `--no-undefined` option forces the linker to check all external names for existence while linking. It's very useful to track real dependencies between libraries in the split build mode. [#5855](https://github.com/yandex/ClickHouse/pull/5855) ([Ivan](https://github.com/abyss7))
-* Added performance test for [#5797](https://github.com/yandex/ClickHouse/issues/5797) [#5914](https://github.com/yandex/ClickHouse/pull/5914) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed compatibility with gcc-7. [#5840](https://github.com/yandex/ClickHouse/pull/5840) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added support for gcc-9. This fixes [#5717](https://github.com/yandex/ClickHouse/issues/5717) [#5774](https://github.com/yandex/ClickHouse/pull/5774) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error when libunwind can be linked incorrectly. [#5948](https://github.com/yandex/ClickHouse/pull/5948) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed a few warnings found by PVS-Studio. [#5921](https://github.com/yandex/ClickHouse/pull/5921) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added initial support for `clang-tidy` static analyzer. [#5806](https://github.com/yandex/ClickHouse/pull/5806) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Convert BSD/Linux endian macros( 'be64toh' and 'htobe64') to the Mac OS X equivalents [#5785](https://github.com/yandex/ClickHouse/pull/5785) ([Fu Chen](https://github.com/fredchenbj))
-* Improved integration tests guide. [#5796](https://github.com/yandex/ClickHouse/pull/5796) ([Vladimir Chebotarev](https://github.com/excitoon))
-* Fixing build at macosx + gcc9 [#5822](https://github.com/yandex/ClickHouse/pull/5822) ([filimonov](https://github.com/filimonov))
-* Fix a hard-to-spot typo: aggreAGte -> aggregate. [#5753](https://github.com/yandex/ClickHouse/pull/5753) ([akuzm](https://github.com/akuzm))
-* Fix freebsd build [#5760](https://github.com/yandex/ClickHouse/pull/5760) ([proller](https://github.com/proller))
-* Add link to experimental YouTube channel to website [#5845](https://github.com/yandex/ClickHouse/pull/5845) ([Ivan Blinkov](https://github.com/blinkov))
-* CMake: add option for coverage flags: WITH_COVERAGE [#5776](https://github.com/yandex/ClickHouse/pull/5776) ([proller](https://github.com/proller))
-* Fix initial size of some inline PODArray's. [#5787](https://github.com/yandex/ClickHouse/pull/5787) ([akuzm](https://github.com/akuzm))
-* clickhouse-server.postinst: fix os detection for centos 6 [#5788](https://github.com/yandex/ClickHouse/pull/5788) ([proller](https://github.com/proller))
-* Added Arch linux package generation. [#5719](https://github.com/yandex/ClickHouse/pull/5719) ([Vladimir Chebotarev](https://github.com/excitoon))
-* Split Common/config.h by libs (dbms) [#5715](https://github.com/yandex/ClickHouse/pull/5715) ([proller](https://github.com/proller))
-* Fixes for "Arcadia" build platform [#5795](https://github.com/yandex/ClickHouse/pull/5795) ([proller](https://github.com/proller))
-* Fixes for unconventional build (gcc9, no submodules) [#5792](https://github.com/yandex/ClickHouse/pull/5792) ([proller](https://github.com/proller))
-* Require explicit type in unalignedStore because it was proven to be bug-prone [#5791](https://github.com/yandex/ClickHouse/pull/5791) ([akuzm](https://github.com/akuzm))
-* Fixes MacOS build [#5830](https://github.com/yandex/ClickHouse/pull/5830) ([filimonov](https://github.com/filimonov))
-* Performance test concerning the new JIT feature with bigger dataset, as requested here [#5263](https://github.com/yandex/ClickHouse/issues/5263) [#5887](https://github.com/yandex/ClickHouse/pull/5887) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Run stateful tests in stress test [12693e568722f11e19859742f56428455501fd2a](https://github.com/yandex/ClickHouse/commit/12693e568722f11e19859742f56428455501fd2a) ([alesapin](https://github.com/alesapin))
+* Add a backward compatibility test for client-server interaction with different versions of ClickHouse. [#5868](https://github.com/ClickHouse/ClickHouse/pull/5868) ([alesapin](https://github.com/alesapin))
+* Test coverage information in every commit and pull request. [#5896](https://github.com/ClickHouse/ClickHouse/pull/5896) ([alesapin](https://github.com/alesapin))
+* Cooperate with address sanitizer to support our custom allocators (`Arena` and `ArenaWithFreeLists`) for better debugging of "use-after-free" errors. [#5728](https://github.com/ClickHouse/ClickHouse/pull/5728) ([akuzm](https://github.com/akuzm))
+* Switch to [LLVM libunwind implementation](https://github.com/llvm-mirror/libunwind) for C++ exception handling and for stack traces printing [#4828](https://github.com/ClickHouse/ClickHouse/pull/4828) ([Nikita Lapkov](https://github.com/laplab))
+* Add two more warnings from -Weverything [#5923](https://github.com/ClickHouse/ClickHouse/pull/5923) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow to build ClickHouse with Memory Sanitizer. [#3949](https://github.com/ClickHouse/ClickHouse/pull/3949) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed ubsan report about `bitTest` function in fuzz test. [#5943](https://github.com/ClickHouse/ClickHouse/pull/5943) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Docker: added possibility to init a ClickHouse instance which requires authentication. [#5727](https://github.com/ClickHouse/ClickHouse/pull/5727) ([Korviakov Andrey](https://github.com/shurshun))
+* Update librdkafka to version 1.1.0 [#5872](https://github.com/ClickHouse/ClickHouse/pull/5872) ([Ivan](https://github.com/abyss7))
+* Add global timeout for integration tests and disable some of them in tests code. [#5741](https://github.com/ClickHouse/ClickHouse/pull/5741) ([alesapin](https://github.com/alesapin))
+* Fix some ThreadSanitizer failures. [#5854](https://github.com/ClickHouse/ClickHouse/pull/5854) ([akuzm](https://github.com/akuzm))
+* The `--no-undefined` option forces the linker to check all external names for existence while linking. It's very useful to track real dependencies between libraries in the split build mode. [#5855](https://github.com/ClickHouse/ClickHouse/pull/5855) ([Ivan](https://github.com/abyss7))
+* Added performance test for [#5797](https://github.com/ClickHouse/ClickHouse/issues/5797) [#5914](https://github.com/ClickHouse/ClickHouse/pull/5914) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed compatibility with gcc-7. [#5840](https://github.com/ClickHouse/ClickHouse/pull/5840) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added support for gcc-9. This fixes [#5717](https://github.com/ClickHouse/ClickHouse/issues/5717) [#5774](https://github.com/ClickHouse/ClickHouse/pull/5774) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed error when libunwind can be linked incorrectly. [#5948](https://github.com/ClickHouse/ClickHouse/pull/5948) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a few warnings found by PVS-Studio. [#5921](https://github.com/ClickHouse/ClickHouse/pull/5921) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added initial support for `clang-tidy` static analyzer. [#5806](https://github.com/ClickHouse/ClickHouse/pull/5806) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Convert BSD/Linux endian macros (`be64toh` and `htobe64`) to the Mac OS X equivalents [#5785](https://github.com/ClickHouse/ClickHouse/pull/5785) ([Fu Chen](https://github.com/fredchenbj))
+* Improved integration tests guide. [#5796](https://github.com/ClickHouse/ClickHouse/pull/5796) ([Vladimir Chebotarev](https://github.com/excitoon))
+* Fix build on macOS with gcc9 [#5822](https://github.com/ClickHouse/ClickHouse/pull/5822) ([filimonov](https://github.com/filimonov))
+* Fix a hard-to-spot typo: aggreAGte -> aggregate. [#5753](https://github.com/ClickHouse/ClickHouse/pull/5753) ([akuzm](https://github.com/akuzm))
+* Fix freebsd build [#5760](https://github.com/ClickHouse/ClickHouse/pull/5760) ([proller](https://github.com/proller))
+* Add link to experimental YouTube channel to website [#5845](https://github.com/ClickHouse/ClickHouse/pull/5845) ([Ivan Blinkov](https://github.com/blinkov))
+* CMake: add option for coverage flags: WITH_COVERAGE [#5776](https://github.com/ClickHouse/ClickHouse/pull/5776) ([proller](https://github.com/proller))
+* Fix initial size of some inline PODArrays. [#5787](https://github.com/ClickHouse/ClickHouse/pull/5787) ([akuzm](https://github.com/akuzm))
+* clickhouse-server.postinst: fix OS detection for CentOS 6 [#5788](https://github.com/ClickHouse/ClickHouse/pull/5788) ([proller](https://github.com/proller))
+* Added Arch Linux package generation. [#5719](https://github.com/ClickHouse/ClickHouse/pull/5719) ([Vladimir Chebotarev](https://github.com/excitoon))
+* Split Common/config.h by libs (dbms) [#5715](https://github.com/ClickHouse/ClickHouse/pull/5715) ([proller](https://github.com/proller))
+* Fixes for "Arcadia" build platform [#5795](https://github.com/ClickHouse/ClickHouse/pull/5795) ([proller](https://github.com/proller))
+* Fixes for unconventional build (gcc9, no submodules) [#5792](https://github.com/ClickHouse/ClickHouse/pull/5792) ([proller](https://github.com/proller))
+* Require explicit type in unalignedStore because it was proven to be bug-prone [#5791](https://github.com/ClickHouse/ClickHouse/pull/5791) ([akuzm](https://github.com/akuzm))
+* Fix macOS build [#5830](https://github.com/ClickHouse/ClickHouse/pull/5830) ([filimonov](https://github.com/filimonov))
+* Performance test concerning the new JIT feature with bigger dataset, as requested here [#5263](https://github.com/ClickHouse/ClickHouse/issues/5263) [#5887](https://github.com/ClickHouse/ClickHouse/pull/5887) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Run stateful tests in stress test [12693e568722f11e19859742f56428455501fd2a](https://github.com/ClickHouse/ClickHouse/commit/12693e568722f11e19859742f56428455501fd2a) ([alesapin](https://github.com/alesapin))
### Backward Incompatible Change
* `Kafka` is broken in this version.
-* Enable `adaptive_index_granularity` = 10MB by default for new `MergeTree` tables. If you created new MergeTree tables on version 19.11+, downgrade to versions prior to 19.6 will be impossible. [#5628](https://github.com/yandex/ClickHouse/pull/5628) ([alesapin](https://github.com/alesapin))
-* Removed obsolete undocumented embedded dictionaries that were used by Yandex.Metrica. The functions `OSIn`, `SEIn`, `OSToRoot`, `SEToRoot`, `OSHierarchy`, `SEHierarchy` are no longer available. If you are using these functions, write email to clickhouse-feedback@yandex-team.com. Note: at the last moment we decided to keep these functions for a while. [#5780](https://github.com/yandex/ClickHouse/pull/5780) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Enable `adaptive_index_granularity` = 10MB by default for new `MergeTree` tables. If you created new MergeTree tables on version 19.11+, downgrading to versions prior to 19.6 will be impossible (see the note after this list). [#5628](https://github.com/ClickHouse/ClickHouse/pull/5628) ([alesapin](https://github.com/alesapin))
+* Removed obsolete undocumented embedded dictionaries that were used by Yandex.Metrica. The functions `OSIn`, `SEIn`, `OSToRoot`, `SEToRoot`, `OSHierarchy`, `SEHierarchy` are no longer available. If you are using these functions, write email to clickhouse-feedback@yandex-team.com. Note: at the last moment we decided to keep these functions for a while. [#5780](https://github.com/ClickHouse/ClickHouse/pull/5780) ([alexey-milovidov](https://github.com/alexey-milovidov))
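+
+A note on keeping downgrade compatibility: adaptive granularity can be disabled per table, assuming the underlying MergeTree setting is `index_granularity_bytes` (0 turns it off):
+```sql
+CREATE TABLE t (x UInt64) ENGINE = MergeTree() ORDER BY x
+SETTINGS index_granularity_bytes = 0;
+```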
## ClickHouse release 19.10.1.5, 2019-07-12
### New Feature
-* Add new column codec: `T64`. Made for (U)IntX/EnumX/Data(Time)/DecimalX columns. It should be good for columns with constant or small range values. Codec itself allows enlarge or shrink data type without re-compression. [#5557](https://github.com/yandex/ClickHouse/pull/5557) ([Artem Zuikov](https://github.com/4ertus2))
-* Add database engine `MySQL` that allow to view all the tables in remote MySQL server [#5599](https://github.com/yandex/ClickHouse/pull/5599) ([Winter Zhang](https://github.com/zhang2014))
-* `bitmapContains` implementation. It's 2x faster than `bitmapHasAny` if the second bitmap contains one element. [#5535](https://github.com/yandex/ClickHouse/pull/5535) ([Zhichang Yu](https://github.com/yuzhichang))
-* Support for `crc32` function (with behaviour exactly as in MySQL or PHP). Do not use it if you need a hash function. [#5661](https://github.com/yandex/ClickHouse/pull/5661) ([Remen Ivan](https://github.com/BHYCHIK))
-* Implemented `SYSTEM START/STOP DISTRIBUTED SENDS` queries to control asynchronous inserts into `Distributed` tables. [#4935](https://github.com/yandex/ClickHouse/pull/4935) ([Winter Zhang](https://github.com/zhang2014))
+* Add new column codec: `T64`. Made for (U)IntX/EnumX/Date(Time)/DecimalX columns. It should be good for columns with constant or small-range values. The codec itself allows enlarging or shrinking the data type without re-compression. A combined example of these new features follows this list. [#5557](https://github.com/ClickHouse/ClickHouse/pull/5557) ([Artem Zuikov](https://github.com/4ertus2))
+* Add database engine `MySQL` that allows viewing all the tables in a remote MySQL server [#5599](https://github.com/ClickHouse/ClickHouse/pull/5599) ([Winter Zhang](https://github.com/zhang2014))
+* `bitmapContains` implementation. It's 2x faster than `bitmapHasAny` if the second bitmap contains one element. [#5535](https://github.com/ClickHouse/ClickHouse/pull/5535) ([Zhichang Yu](https://github.com/yuzhichang))
+* Support for `crc32` function (with behaviour exactly as in MySQL or PHP). Do not use it if you need a hash function. [#5661](https://github.com/ClickHouse/ClickHouse/pull/5661) ([Remen Ivan](https://github.com/BHYCHIK))
+* Implemented `SYSTEM START/STOP DISTRIBUTED SENDS` queries to control asynchronous inserts into `Distributed` tables. [#4935](https://github.com/ClickHouse/ClickHouse/pull/4935) ([Winter Zhang](https://github.com/zhang2014))
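+
+A combined sketch of the new features above; connection parameters and table names are placeholders:
+```sql
+-- T64 codec on an integer column, chained with ZSTD.
+CREATE TABLE metrics (dt Date, value UInt32 CODEC(T64, ZSTD))
+ENGINE = MergeTree() ORDER BY dt;
+
+-- MySQL database engine exposing the tables of a remote MySQL server.
+CREATE DATABASE mysql_db ENGINE = MySQL('host:3306', 'database', 'user', 'password');
+
+-- crc32 with MySQL/PHP-compatible behaviour (not a hash function).
+SELECT crc32('ClickHouse');
+
+-- Control asynchronous inserts into a Distributed table.
+SYSTEM STOP DISTRIBUTED SENDS dist_table;
+SYSTEM START DISTRIBUTED SENDS dist_table;
+```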
### Bug Fix
-* Ignore query execution limits and max parts size for merge limits while executing mutations. [#5659](https://github.com/yandex/ClickHouse/pull/5659) ([Anton Popov](https://github.com/CurtizJ))
-* Fix bug which may lead to deduplication of normal blocks (extremely rare) and insertion of duplicate blocks (more often). [#5549](https://github.com/yandex/ClickHouse/pull/5549) ([alesapin](https://github.com/alesapin))
-* Fix of function `arrayEnumerateUniqRanked` for arguments with empty arrays [#5559](https://github.com/yandex/ClickHouse/pull/5559) ([proller](https://github.com/proller))
-* Don't subscribe to Kafka topics without intent to poll any messages. [#5698](https://github.com/yandex/ClickHouse/pull/5698) ([Ivan](https://github.com/abyss7))
-* Make setting `join_use_nulls` get no effect for types that cannot be inside Nullable [#5700](https://github.com/yandex/ClickHouse/pull/5700) ([Olga Khvostikova](https://github.com/stavrolia))
-* Fixed `Incorrect size of index granularity` errors [#5720](https://github.com/yandex/ClickHouse/pull/5720) ([coraxster](https://github.com/coraxster))
-* Fix Float to Decimal convert overflow [#5607](https://github.com/yandex/ClickHouse/pull/5607) ([coraxster](https://github.com/coraxster))
-* Flush buffer when `WriteBufferFromHDFS`'s destructor is called. This fixes writing into `HDFS`. [#5684](https://github.com/yandex/ClickHouse/pull/5684) ([Xindong Peng](https://github.com/eejoin))
+* Ignore query execution limits and the max parts size limit for merges while executing mutations. [#5659](https://github.com/ClickHouse/ClickHouse/pull/5659) ([Anton Popov](https://github.com/CurtizJ))
+* Fix bug which may lead to deduplication of normal blocks (extremely rare) and insertion of duplicate blocks (more often). [#5549](https://github.com/ClickHouse/ClickHouse/pull/5549) ([alesapin](https://github.com/alesapin))
+* Fix the function `arrayEnumerateUniqRanked` for arguments with empty arrays [#5559](https://github.com/ClickHouse/ClickHouse/pull/5559) ([proller](https://github.com/proller))
+* Don't subscribe to Kafka topics without intent to poll any messages. [#5698](https://github.com/ClickHouse/ClickHouse/pull/5698) ([Ivan](https://github.com/abyss7))
+* Make the setting `join_use_nulls` have no effect for types that cannot be inside Nullable [#5700](https://github.com/ClickHouse/ClickHouse/pull/5700) ([Olga Khvostikova](https://github.com/stavrolia))
+* Fixed `Incorrect size of index granularity` errors [#5720](https://github.com/ClickHouse/ClickHouse/pull/5720) ([coraxster](https://github.com/coraxster))
+* Fix Float-to-Decimal conversion overflow [#5607](https://github.com/ClickHouse/ClickHouse/pull/5607) ([coraxster](https://github.com/coraxster))
+* Flush buffer when `WriteBufferFromHDFS`'s destructor is called. This fixes writing into `HDFS`. [#5684](https://github.com/ClickHouse/ClickHouse/pull/5684) ([Xindong Peng](https://github.com/eejoin))
### Improvement
-* Treat empty cells in `CSV` as default values when the setting `input_format_defaults_for_omitted_fields` is enabled. [#5625](https://github.com/yandex/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
-* Non-blocking loading of external dictionaries. [#5567](https://github.com/yandex/ClickHouse/pull/5567) ([Vitaly Baranov](https://github.com/vitlibar))
-* Network timeouts can be dynamically changed for already established connections according to the settings. [#4558](https://github.com/yandex/ClickHouse/pull/4558) ([Konstantin Podshumok](https://github.com/podshumok))
-* Using "public_suffix_list" for functions `firstSignificantSubdomain`, `cutToFirstSignificantSubdomain`. It's using a perfect hash table generated by `gperf` with a list generated from the file: [https://publicsuffix.org/list/public_suffix_list.dat](https://publicsuffix.org/list/public_suffix_list.dat). (for example, now we recognize the domain `ac.uk` as non-significant). [#5030](https://github.com/yandex/ClickHouse/pull/5030) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Adopted `IPv6` data type in system tables; unified client info columns in `system.processes` and `system.query_log` [#5640](https://github.com/yandex/ClickHouse/pull/5640) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Using sessions for connections with MySQL compatibility protocol. #5476 [#5646](https://github.com/yandex/ClickHouse/pull/5646) ([Yuriy Baranov](https://github.com/yurriy))
-* Support more `ALTER` queries `ON CLUSTER`. [#5593](https://github.com/yandex/ClickHouse/pull/5593) [#5613](https://github.com/yandex/ClickHouse/pull/5613) ([sundyli](https://github.com/sundy-li))
-* Support `` section in `clickhouse-local` config file. [#5540](https://github.com/yandex/ClickHouse/pull/5540) ([proller](https://github.com/proller))
-* Allow run query with `remote` table function in `clickhouse-local` [#5627](https://github.com/yandex/ClickHouse/pull/5627) ([proller](https://github.com/proller))
+* Treat empty cells in `CSV` as default values when the setting `input_format_defaults_for_omitted_fields` is enabled. [#5625](https://github.com/ClickHouse/ClickHouse/pull/5625) ([akuzm](https://github.com/akuzm))
+* Non-blocking loading of external dictionaries. [#5567](https://github.com/ClickHouse/ClickHouse/pull/5567) ([Vitaly Baranov](https://github.com/vitlibar))
+* Network timeouts can be dynamically changed for already established connections according to the settings. [#4558](https://github.com/ClickHouse/ClickHouse/pull/4558) ([Konstantin Podshumok](https://github.com/podshumok))
+* Using "public_suffix_list" for functions `firstSignificantSubdomain`, `cutToFirstSignificantSubdomain`. It's using a perfect hash table generated by `gperf` with a list generated from the file: [https://publicsuffix.org/list/public_suffix_list.dat](https://publicsuffix.org/list/public_suffix_list.dat). (for example, now we recognize the domain `ac.uk` as non-significant). [#5030](https://github.com/ClickHouse/ClickHouse/pull/5030) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Adopted `IPv6` data type in system tables; unified client info columns in `system.processes` and `system.query_log` [#5640](https://github.com/ClickHouse/ClickHouse/pull/5640) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Using sessions for connections with MySQL compatibility protocol. #5476 [#5646](https://github.com/ClickHouse/ClickHouse/pull/5646) ([Yuriy Baranov](https://github.com/yurriy))
+* Support more `ALTER` queries `ON CLUSTER`. [#5593](https://github.com/ClickHouse/ClickHouse/pull/5593) [#5613](https://github.com/ClickHouse/ClickHouse/pull/5613) ([sundyli](https://github.com/sundy-li))
+* Support `` section in `clickhouse-local` config file. [#5540](https://github.com/ClickHouse/ClickHouse/pull/5540) ([proller](https://github.com/proller))
+* Allow running queries with the `remote` table function in `clickhouse-local`; a combined example, also covering the CSV defaults item above, follows this list [#5627](https://github.com/ClickHouse/ClickHouse/pull/5627) ([proller](https://github.com/proller))
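+
+A combined sketch of two items above: empty CSV cells as defaults, and the `remote` table function in `clickhouse-local`; the table `t_csv` and the host name are hypothetical:
+```sql
+SET input_format_defaults_for_omitted_fields = 1;
+-- The empty second cell becomes the column's default value.
+INSERT INTO t_csv FORMAT CSV
+1,,3
+
+-- remote() can now be used from clickhouse-local as well.
+SELECT count() FROM remote('server-01', default.t_csv);
+```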
### Performance Improvement
-* Add the possibility to write the final mark at the end of MergeTree columns. It allows to avoid useless reads for keys that are out of table data range. It is enabled only if adaptive index granularity is in use. [#5624](https://github.com/yandex/ClickHouse/pull/5624) ([alesapin](https://github.com/alesapin))
-* Improved performance of MergeTree tables on very slow filesystems by reducing number of `stat` syscalls. [#5648](https://github.com/yandex/ClickHouse/pull/5648) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed performance degradation in reading from MergeTree tables that was introduced in version 19.6. Fixes #5631. [#5633](https://github.com/yandex/ClickHouse/pull/5633) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add the possibility to write the final mark at the end of MergeTree columns. It allows avoiding useless reads for keys that are out of the table data range. It is enabled only if adaptive index granularity is in use. [#5624](https://github.com/ClickHouse/ClickHouse/pull/5624) ([alesapin](https://github.com/alesapin))
+* Improved performance of MergeTree tables on very slow filesystems by reducing number of `stat` syscalls. [#5648](https://github.com/ClickHouse/ClickHouse/pull/5648) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed performance degradation in reading from MergeTree tables that was introduced in version 19.6. Fixes #5631. [#5633](https://github.com/ClickHouse/ClickHouse/pull/5633) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Build/Testing/Packaging Improvement
-* Implemented `TestKeeper` as an implementation of ZooKeeper interface used for testing [#5643](https://github.com/yandex/ClickHouse/pull/5643) ([alexey-milovidov](https://github.com/alexey-milovidov)) ([levushkin aleksej](https://github.com/alexey-milovidov))
-* From now on `.sql` tests can be run isolated by server, in parallel, with random database. It allows to run them faster, add new tests with custom server configurations, and be sure that different tests doesn't affect each other. [#5554](https://github.com/yandex/ClickHouse/pull/5554) ([Ivan](https://github.com/abyss7))
-* Remove `` and `` from performance tests [#5672](https://github.com/yandex/ClickHouse/pull/5672) ([Olga Khvostikova](https://github.com/stavrolia))
-* Fixed "select_format" performance test for `Pretty` formats [#5642](https://github.com/yandex/ClickHouse/pull/5642) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Implemented `TestKeeper` as an implementation of the ZooKeeper interface used for testing [#5643](https://github.com/ClickHouse/ClickHouse/pull/5643) ([alexey-milovidov](https://github.com/alexey-milovidov)) ([levushkin aleksej](https://github.com/alexey-milovidov))
+* From now on `.sql` tests can be run isolated by the server, in parallel, with a random database. This allows running them faster, adding new tests with custom server configurations, and being sure that different tests don't affect each other. [#5554](https://github.com/ClickHouse/ClickHouse/pull/5554) ([Ivan](https://github.com/abyss7))
+* Remove `` and `` from performance tests [#5672](https://github.com/ClickHouse/ClickHouse/pull/5672) ([Olga Khvostikova](https://github.com/stavrolia))
+* Fixed "select_format" performance test for `Pretty` formats [#5642](https://github.com/ClickHouse/ClickHouse/pull/5642) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.9.3.31, 2019-07-05
### Bug Fix
-* Fix segfault in Delta codec which affects columns with values less than 32 bits size. The bug led to random memory corruption. [#5786](https://github.com/yandex/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
-* Fix rare bug in checking of part with LowCardinality column. [#5832](https://github.com/yandex/ClickHouse/pull/5832) ([alesapin](https://github.com/alesapin))
-* Fix segfault in TTL merge with non-physical columns in block. [#5819](https://github.com/yandex/ClickHouse/pull/5819) ([Anton Popov](https://github.com/CurtizJ))
-* Fix potential infinite sleeping of low-priority queries. [#5842](https://github.com/yandex/ClickHouse/pull/5842) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix how ClickHouse determines default time zone as UCT instead of UTC. [#5828](https://github.com/yandex/ClickHouse/pull/5828) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug about executing distributed DROP/ALTER/TRUNCATE/OPTIMIZE ON CLUSTER queries on follower replica before leader replica. Now they will be executed directly on leader replica. [#5757](https://github.com/yandex/ClickHouse/pull/5757) ([alesapin](https://github.com/alesapin))
-* Fix race condition, which cause that some queries may not appear in query_log instantly after SYSTEM FLUSH LOGS query. [#5685](https://github.com/yandex/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
-* Added missing support for constant arguments to `evalMLModel` function. [#5820](https://github.com/yandex/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix segfault in Delta codec which affects columns with values less than 32 bits size. The bug led to random memory corruption. [#5786](https://github.com/ClickHouse/ClickHouse/pull/5786) ([alesapin](https://github.com/alesapin))
+* Fix rare bug in checking of part with LowCardinality column. [#5832](https://github.com/ClickHouse/ClickHouse/pull/5832) ([alesapin](https://github.com/alesapin))
+* Fix segfault in TTL merge with non-physical columns in block. [#5819](https://github.com/ClickHouse/ClickHouse/pull/5819) ([Anton Popov](https://github.com/CurtizJ))
+* Fix potential infinite sleeping of low-priority queries. [#5842](https://github.com/ClickHouse/ClickHouse/pull/5842) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix the issue where ClickHouse determined the default time zone as `UCT` instead of `UTC`. [#5828](https://github.com/ClickHouse/ClickHouse/pull/5828) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix bug where distributed DROP/ALTER/TRUNCATE/OPTIMIZE ON CLUSTER queries were executed on a follower replica before the leader replica. Now they will be executed directly on the leader replica. [#5757](https://github.com/ClickHouse/ClickHouse/pull/5757) ([alesapin](https://github.com/alesapin))
+* Fix race condition which caused some queries to not appear in query_log immediately after a SYSTEM FLUSH LOGS query. [#5685](https://github.com/ClickHouse/ClickHouse/pull/5685) ([Anton Popov](https://github.com/CurtizJ))
+* Added missing support for constant arguments to `evalMLModel` function. [#5820](https://github.com/ClickHouse/ClickHouse/pull/5820) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.7.5.29, 2019-07-05
### Bug Fix
-* Fix performance regression in some queries with JOIN. [#5192](https://github.com/yandex/ClickHouse/pull/5192) ([Winter Zhang](https://github.com/zhang2014))
+* Fix performance regression in some queries with JOIN. [#5192](https://github.com/ClickHouse/ClickHouse/pull/5192) ([Winter Zhang](https://github.com/zhang2014))
## ClickHouse release 19.9.2.4, 2019-06-24
### New Feature
-* Print information about frozen parts in `system.parts` table. [#5471](https://github.com/yandex/ClickHouse/pull/5471) ([proller](https://github.com/proller))
-* Ask client password on clickhouse-client start on tty if not set in arguments [#5092](https://github.com/yandex/ClickHouse/pull/5092) ([proller](https://github.com/proller))
-* Implement `dictGet` and `dictGetOrDefault` functions for Decimal types. [#5394](https://github.com/yandex/ClickHouse/pull/5394) ([Artem Zuikov](https://github.com/4ertus2))
+* Print information about frozen parts in the `system.parts` table (see the example after this list). [#5471](https://github.com/ClickHouse/ClickHouse/pull/5471) ([proller](https://github.com/proller))
+* Ask for the client password on clickhouse-client start on a TTY if it is not set in the arguments [#5092](https://github.com/ClickHouse/ClickHouse/pull/5092) ([proller](https://github.com/proller))
+* Implement `dictGet` and `dictGetOrDefault` functions for Decimal types. [#5394](https://github.com/ClickHouse/ClickHouse/pull/5394) ([Artem Zuikov](https://github.com/4ertus2))
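+
+A short sketch of the new features above; the dictionary is illustrative and the frozen-part flag is assumed to be exposed as `is_frozen`:
+```sql
+-- Frozen (backed-up) parts are now visible in system.parts.
+SELECT name, partition FROM system.parts WHERE table = 'events' AND is_frozen;
+
+-- dictGetOrDefault over a Decimal attribute.
+SELECT dictGetOrDefault('prices_dict', 'price', toUInt64(1), toDecimal32(0, 2));
+```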
### Improvement
-* Debian init: Add service stop timeout [#5522](https://github.com/yandex/ClickHouse/pull/5522) ([proller](https://github.com/proller))
-* Add setting forbidden by default to create table with suspicious types for LowCardinality [#5448](https://github.com/yandex/ClickHouse/pull/5448) ([Olga Khvostikova](https://github.com/stavrolia))
-* Regression functions return model weights when not used as State in function `evalMLMethod`. [#5411](https://github.com/yandex/ClickHouse/pull/5411) ([Quid37](https://github.com/Quid37))
-* Rename and improve regression methods. [#5492](https://github.com/yandex/ClickHouse/pull/5492) ([Quid37](https://github.com/Quid37))
-* Clearer interfaces of string searchers. [#5586](https://github.com/yandex/ClickHouse/pull/5586) ([Danila Kutenin](https://github.com/danlark1))
+* Debian init: Add service stop timeout [#5522](https://github.com/ClickHouse/ClickHouse/pull/5522) ([proller](https://github.com/proller))
+* Creating a table with suspicious types for LowCardinality is now forbidden by default; a setting allows overriding this (see the sketch after this list) [#5448](https://github.com/ClickHouse/ClickHouse/pull/5448) ([Olga Khvostikova](https://github.com/stavrolia))
+* Regression functions return model weights when not used as State in function `evalMLMethod`. [#5411](https://github.com/ClickHouse/ClickHouse/pull/5411) ([Quid37](https://github.com/Quid37))
+* Rename and improve regression methods. [#5492](https://github.com/ClickHouse/ClickHouse/pull/5492) ([Quid37](https://github.com/Quid37))
+* Clearer interfaces of string searchers. [#5586](https://github.com/ClickHouse/ClickHouse/pull/5586) ([Danila Kutenin](https://github.com/danlark1))
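+
+A sketch of the LowCardinality guard above, assuming the setting is named `allow_suspicious_low_cardinality_types`:
+```sql
+-- Such tables are rejected by default; the override must be explicit.
+SET allow_suspicious_low_cardinality_types = 1;
+CREATE TABLE t_lc (x LowCardinality(UInt64)) ENGINE = Memory;
+```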
### Bug Fix
-* Fix potential data loss in Kafka [#5445](https://github.com/yandex/ClickHouse/pull/5445) ([Ivan](https://github.com/abyss7))
-* Fix potential infinite loop in `PrettySpace` format when called with zero columns [#5560](https://github.com/yandex/ClickHouse/pull/5560) ([Olga Khvostikova](https://github.com/stavrolia))
-* Fixed UInt32 overflow bug in linear models. Allow eval ML model for non-const model argument. [#5516](https://github.com/yandex/ClickHouse/pull/5516) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* `ALTER TABLE ... DROP INDEX IF EXISTS ...` should not raise an exception if provided index does not exist [#5524](https://github.com/yandex/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn))
-* Fix segfault with `bitmapHasAny` in scalar subquery [#5528](https://github.com/yandex/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang))
-* Fixed error when replication connection pool doesn't retry to resolve host, even when DNS cache was dropped. [#5534](https://github.com/yandex/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin))
-* Fixed `ALTER ... MODIFY TTL` on ReplicatedMergeTree. [#5539](https://github.com/yandex/ClickHouse/pull/5539) ([Anton Popov](https://github.com/CurtizJ))
-* Fix INSERT into Distributed table with MATERIALIZED column [#5429](https://github.com/yandex/ClickHouse/pull/5429) ([Azat Khuzhin](https://github.com/azat))
-* Fix bad alloc when truncate Join storage [#5437](https://github.com/yandex/ClickHouse/pull/5437) ([TCeason](https://github.com/TCeason))
-* In recent versions of package tzdata some of files are symlinks now. The current mechanism for detecting default timezone gets broken and gives wrong names for some timezones. Now at least we force the timezone name to the contents of TZ if provided. [#5443](https://github.com/yandex/ClickHouse/pull/5443) ([Ivan](https://github.com/abyss7))
-* Fix some extremely rare cases with MultiVolnitsky searcher when the constant needles in sum are at least 16KB long. The algorithm missed or overwrote the previous results which can lead to the incorrect result of `multiSearchAny`. [#5588](https://github.com/yandex/ClickHouse/pull/5588) ([Danila Kutenin](https://github.com/danlark1))
-* Fix the issue when settings for ExternalData requests couldn't use ClickHouse settings. Also, for now, settings `date_time_input_format` and `low_cardinality_allow_in_native_format` cannot be used because of the ambiguity of names (in external data it can be interpreted as table format and in the query it can be a setting). [#5455](https://github.com/yandex/ClickHouse/pull/5455) ([Danila Kutenin](https://github.com/danlark1))
-* Fix bug when parts were removed only from FS without dropping them from Zookeeper. [#5520](https://github.com/yandex/ClickHouse/pull/5520) ([alesapin](https://github.com/alesapin))
-* Remove debug logging from MySQL protocol [#5478](https://github.com/yandex/ClickHouse/pull/5478) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Skip ZNONODE during DDL query processing [#5489](https://github.com/yandex/ClickHouse/pull/5489) ([Azat Khuzhin](https://github.com/azat))
-* Fix mix `UNION ALL` result column type. There were cases with inconsistent data and column types of resulting columns. [#5503](https://github.com/yandex/ClickHouse/pull/5503) ([Artem Zuikov](https://github.com/4ertus2))
-* Throw an exception on wrong integers in `dictGetT` functions instead of crash. [#5446](https://github.com/yandex/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix wrong element_count and load_factor for hashed dictionary in `system.dictionaries` table. [#5440](https://github.com/yandex/ClickHouse/pull/5440) ([Azat Khuzhin](https://github.com/azat))
+* Fix potential data loss in Kafka [#5445](https://github.com/ClickHouse/ClickHouse/pull/5445) ([Ivan](https://github.com/abyss7))
+* Fix potential infinite loop in `PrettySpace` format when called with zero columns [#5560](https://github.com/ClickHouse/ClickHouse/pull/5560) ([Olga Khvostikova](https://github.com/stavrolia))
+* Fixed UInt32 overflow bug in linear models. Allow evaluating an ML model with a non-const model argument. [#5516](https://github.com/ClickHouse/ClickHouse/pull/5516) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* `ALTER TABLE ... DROP INDEX IF EXISTS ...` should not raise an exception if provided index does not exist [#5524](https://github.com/ClickHouse/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn))
+* Fix segfault with `bitmapHasAny` in scalar subquery [#5528](https://github.com/ClickHouse/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang))
+* Fixed error when the replication connection pool didn't retry resolving the host, even when the DNS cache was dropped. [#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin))
+* Fixed `ALTER ... MODIFY TTL` on ReplicatedMergeTree. [#5539](https://github.com/ClickHouse/ClickHouse/pull/5539) ([Anton Popov](https://github.com/CurtizJ))
+* Fix INSERT into Distributed table with MATERIALIZED column [#5429](https://github.com/ClickHouse/ClickHouse/pull/5429) ([Azat Khuzhin](https://github.com/azat))
+* Fix bad alloc when truncating Join storage [#5437](https://github.com/ClickHouse/ClickHouse/pull/5437) ([TCeason](https://github.com/TCeason))
+* In recent versions of the tzdata package, some files are now symlinks. The current mechanism for detecting the default timezone got broken and gave wrong names for some timezones. Now at least we force the timezone name to the contents of TZ if provided. [#5443](https://github.com/ClickHouse/ClickHouse/pull/5443) ([Ivan](https://github.com/abyss7))
+* Fix some extremely rare cases with the MultiVolnitsky searcher when the constant needles sum to at least 16KB in length. The algorithm missed or overwrote previous results, which could lead to an incorrect result of `multiSearchAny`. [#5588](https://github.com/ClickHouse/ClickHouse/pull/5588) ([Danila Kutenin](https://github.com/danlark1))
+* Fix the issue when settings for ExternalData requests couldn't use ClickHouse settings. Also, for now, settings `date_time_input_format` and `low_cardinality_allow_in_native_format` cannot be used because of the ambiguity of names (in external data it can be interpreted as table format and in the query it can be a setting). [#5455](https://github.com/ClickHouse/ClickHouse/pull/5455) ([Danila Kutenin](https://github.com/danlark1))
+* Fix bug when parts were removed only from FS without dropping them from Zookeeper. [#5520](https://github.com/ClickHouse/ClickHouse/pull/5520) ([alesapin](https://github.com/alesapin))
+* Remove debug logging from MySQL protocol [#5478](https://github.com/ClickHouse/ClickHouse/pull/5478) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Skip ZNONODE during DDL query processing [#5489](https://github.com/ClickHouse/ClickHouse/pull/5489) ([Azat Khuzhin](https://github.com/azat))
+* Fix mixed `UNION ALL` result column types. There were cases with inconsistent data and column types of the resulting columns. [#5503](https://github.com/ClickHouse/ClickHouse/pull/5503) ([Artem Zuikov](https://github.com/4ertus2))
+* Throw an exception on wrong integers in `dictGetT` functions instead of crashing. [#5446](https://github.com/ClickHouse/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix wrong element_count and load_factor for hashed dictionary in `system.dictionaries` table. [#5440](https://github.com/ClickHouse/ClickHouse/pull/5440) ([Azat Khuzhin](https://github.com/azat))
### Build/Testing/Packaging Improvement
-* Fixed build without `Brotli` HTTP compression support (`ENABLE_BROTLI=OFF` cmake variable). [#5521](https://github.com/yandex/ClickHouse/pull/5521) ([Anton Yuzhaninov](https://github.com/citrin))
-* Include roaring.h as roaring/roaring.h [#5523](https://github.com/yandex/ClickHouse/pull/5523) ([Orivej Desh](https://github.com/orivej))
-* Fix gcc9 warnings in hyperscan (#line directive is evil!) [#5546](https://github.com/yandex/ClickHouse/pull/5546) ([Danila Kutenin](https://github.com/danlark1))
-* Fix all warnings when compiling with gcc-9. Fix some contrib issues. Fix gcc9 ICE and submit it to bugzilla. [#5498](https://github.com/yandex/ClickHouse/pull/5498) ([Danila Kutenin](https://github.com/danlark1))
-* Fixed linking with lld [#5477](https://github.com/yandex/ClickHouse/pull/5477) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Remove unused specializations in dictionaries [#5452](https://github.com/yandex/ClickHouse/pull/5452) ([Artem Zuikov](https://github.com/4ertus2))
-* Improvement performance tests for formatting and parsing tables for different types of files [#5497](https://github.com/yandex/ClickHouse/pull/5497) ([Olga Khvostikova](https://github.com/stavrolia))
-* Fixes for parallel test run [#5506](https://github.com/yandex/ClickHouse/pull/5506) ([proller](https://github.com/proller))
-* Docker: use configs from clickhouse-test [#5531](https://github.com/yandex/ClickHouse/pull/5531) ([proller](https://github.com/proller))
-* Fix compile for FreeBSD [#5447](https://github.com/yandex/ClickHouse/pull/5447) ([proller](https://github.com/proller))
-* Upgrade boost to 1.70 [#5570](https://github.com/yandex/ClickHouse/pull/5570) ([proller](https://github.com/proller))
-* Fix build clickhouse as submodule [#5574](https://github.com/yandex/ClickHouse/pull/5574) ([proller](https://github.com/proller))
-* Improve JSONExtract performance tests [#5444](https://github.com/yandex/ClickHouse/pull/5444) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fixed build without `Brotli` HTTP compression support (`ENABLE_BROTLI=OFF` cmake variable). [#5521](https://github.com/ClickHouse/ClickHouse/pull/5521) ([Anton Yuzhaninov](https://github.com/citrin))
+* Include roaring.h as roaring/roaring.h [#5523](https://github.com/ClickHouse/ClickHouse/pull/5523) ([Orivej Desh](https://github.com/orivej))
+* Fix gcc9 warnings in hyperscan (#line directive is evil!) [#5546](https://github.com/ClickHouse/ClickHouse/pull/5546) ([Danila Kutenin](https://github.com/danlark1))
+* Fix all warnings when compiling with gcc-9. Fix some contrib issues. Fix gcc9 ICE and submit it to bugzilla. [#5498](https://github.com/ClickHouse/ClickHouse/pull/5498) ([Danila Kutenin](https://github.com/danlark1))
+* Fixed linking with lld [#5477](https://github.com/ClickHouse/ClickHouse/pull/5477) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Remove unused specializations in dictionaries [#5452](https://github.com/ClickHouse/ClickHouse/pull/5452) ([Artem Zuikov](https://github.com/4ertus2))
+* Improved performance tests for formatting and parsing tables for different types of files [#5497](https://github.com/ClickHouse/ClickHouse/pull/5497) ([Olga Khvostikova](https://github.com/stavrolia))
+* Fixes for parallel test run [#5506](https://github.com/ClickHouse/ClickHouse/pull/5506) ([proller](https://github.com/proller))
+* Docker: use configs from clickhouse-test [#5531](https://github.com/ClickHouse/ClickHouse/pull/5531) ([proller](https://github.com/proller))
+* Fix compile for FreeBSD [#5447](https://github.com/ClickHouse/ClickHouse/pull/5447) ([proller](https://github.com/proller))
+* Upgrade boost to 1.70 [#5570](https://github.com/ClickHouse/ClickHouse/pull/5570) ([proller](https://github.com/proller))
+* Fix building ClickHouse as a submodule [#5574](https://github.com/ClickHouse/ClickHouse/pull/5574) ([proller](https://github.com/proller))
+* Improve JSONExtract performance tests [#5444](https://github.com/ClickHouse/ClickHouse/pull/5444) ([Vitaly Baranov](https://github.com/vitlibar))
## ClickHouse release 19.8.3.8, 2019-06-11
### New Features
-* Added functions to work with JSON [#4686](https://github.com/yandex/ClickHouse/pull/4686) ([hcz](https://github.com/hczhcz)) [#5124](https://github.com/yandex/ClickHouse/pull/5124). ([Vitaly Baranov](https://github.com/vitlibar))
-* Add a function basename, with a similar behaviour to a basename function, which exists in a lot of languages (`os.path.basename` in python, `basename` in PHP, etc...). Work with both an UNIX-like path or a Windows path. [#5136](https://github.com/yandex/ClickHouse/pull/5136) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Added `LIMIT n, m BY` or `LIMIT m OFFSET n BY` syntax to set offset of n for LIMIT BY clause. [#5138](https://github.com/yandex/ClickHouse/pull/5138) ([Anton Popov](https://github.com/CurtizJ))
-* Added new data type `SimpleAggregateFunction`, which allows to have columns with light aggregation in an `AggregatingMergeTree`. This can only be used with simple functions like `any`, `anyLast`, `sum`, `min`, `max`. [#4629](https://github.com/yandex/ClickHouse/pull/4629) ([Boris Granveaud](https://github.com/bgranvea))
-* Added support for non-constant arguments in function `ngramDistance` [#5198](https://github.com/yandex/ClickHouse/pull/5198) ([Danila Kutenin](https://github.com/danlark1))
-* Added functions `skewPop`, `skewSamp`, `kurtPop` and `kurtSamp` to compute for sequence skewness, sample skewness, kurtosis and sample kurtosis respectively. [#5200](https://github.com/yandex/ClickHouse/pull/5200) ([hcz](https://github.com/hczhcz))
-* Support rename operation for `MaterializeView` storage. [#5209](https://github.com/yandex/ClickHouse/pull/5209) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Added server which allows connecting to ClickHouse using MySQL client. [#4715](https://github.com/yandex/ClickHouse/pull/4715) ([Yuriy Baranov](https://github.com/yurriy))
-* Add `toDecimal*OrZero` and `toDecimal*OrNull` functions. [#5291](https://github.com/yandex/ClickHouse/pull/5291) ([Artem Zuikov](https://github.com/4ertus2))
-* Support Decimal types in functions: `quantile`, `quantiles`, `median`, `quantileExactWeighted`, `quantilesExactWeighted`, medianExactWeighted. [#5304](https://github.com/yandex/ClickHouse/pull/5304) ([Artem Zuikov](https://github.com/4ertus2))
-* Added `toValidUTF8` function, which replaces all invalid UTF-8 characters by replacement character � (U+FFFD). [#5322](https://github.com/yandex/ClickHouse/pull/5322) ([Danila Kutenin](https://github.com/danlark1))
-* Added `format` function. Formatting constant pattern (simplified Python format pattern) with the strings listed in the arguments. [#5330](https://github.com/yandex/ClickHouse/pull/5330) ([Danila Kutenin](https://github.com/danlark1))
-* Added `system.detached_parts` table containing information about detached parts of `MergeTree` tables. [#5353](https://github.com/yandex/ClickHouse/pull/5353) ([akuzm](https://github.com/akuzm))
-* Added `ngramSearch` function to calculate the non-symmetric difference between needle and haystack. [#5418](https://github.com/yandex/ClickHouse/pull/5418)[#5422](https://github.com/yandex/ClickHouse/pull/5422) ([Danila Kutenin](https://github.com/danlark1))
-* Implementation of basic machine learning methods (stochastic linear regression and logistic regression) using aggregate functions interface. Has different strategies for updating model weights (simple gradient descent, momentum method, Nesterov method). Also supports mini-batches of custom size. [#4943](https://github.com/yandex/ClickHouse/pull/4943) ([Quid37](https://github.com/Quid37))
-* Implementation of `geohashEncode` and `geohashDecode` functions. [#5003](https://github.com/yandex/ClickHouse/pull/5003) ([Vasily Nemkov](https://github.com/Enmk))
-* Added aggregate function `timeSeriesGroupSum`, which can aggregate different time series that sample timestamp not alignment. It will use linear interpolation between two sample timestamp and then sum time-series together. Added aggregate function `timeSeriesGroupRateSum`, which calculates the rate of time-series and then sum rates together. [#4542](https://github.com/yandex/ClickHouse/pull/4542) ([Yangkuan Liu](https://github.com/LiuYangkuan))
-* Added functions `IPv4CIDRtoIPv4Range` and `IPv6CIDRtoIPv6Range` to calculate the lower and higher bounds for an IP in the subnet using a CIDR. [#5095](https://github.com/yandex/ClickHouse/pull/5095) ([Guillaume Tassery](https://github.com/YiuRULE))
-* Add a X-ClickHouse-Summary header when we send a query using HTTP with enabled setting `send_progress_in_http_headers`. Return the usual information of X-ClickHouse-Progress, with additional information like how many rows and bytes were inserted in the query. [#5116](https://github.com/yandex/ClickHouse/pull/5116) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Added functions to work with JSON [#4686](https://github.com/ClickHouse/ClickHouse/pull/4686) ([hcz](https://github.com/hczhcz)) [#5124](https://github.com/ClickHouse/ClickHouse/pull/5124). ([Vitaly Baranov](https://github.com/vitlibar))
+* Added a `basename` function that behaves like the basename functions found in many languages (`os.path.basename` in Python, `basename` in PHP, etc.). It works with both UNIX-like and Windows paths. [#5136](https://github.com/ClickHouse/ClickHouse/pull/5136) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Added `LIMIT n, m BY` or `LIMIT m OFFSET n BY` syntax to set an offset of n for the LIMIT BY clause (see the sketch after this list). [#5138](https://github.com/ClickHouse/ClickHouse/pull/5138) ([Anton Popov](https://github.com/CurtizJ))
+* Added new data type `SimpleAggregateFunction`, which allows columns with light aggregation in an `AggregatingMergeTree`. It can only be used with simple functions like `any`, `anyLast`, `sum`, `min`, `max`. [#4629](https://github.com/ClickHouse/ClickHouse/pull/4629) ([Boris Granveaud](https://github.com/bgranvea))
+* Added support for non-constant arguments in function `ngramDistance` [#5198](https://github.com/ClickHouse/ClickHouse/pull/5198) ([Danila Kutenin](https://github.com/danlark1))
+* Added functions `skewPop`, `skewSamp`, `kurtPop` and `kurtSamp` to compute sequence skewness, sample skewness, kurtosis and sample kurtosis respectively. [#5200](https://github.com/ClickHouse/ClickHouse/pull/5200) ([hcz](https://github.com/hczhcz))
+* Support rename operation for `MaterializeView` storage. [#5209](https://github.com/ClickHouse/ClickHouse/pull/5209) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Added a server that allows connecting to ClickHouse using the MySQL client. [#4715](https://github.com/ClickHouse/ClickHouse/pull/4715) ([Yuriy Baranov](https://github.com/yurriy))
+* Add `toDecimal*OrZero` and `toDecimal*OrNull` functions. [#5291](https://github.com/ClickHouse/ClickHouse/pull/5291) ([Artem Zuikov](https://github.com/4ertus2))
+* Support Decimal types in functions: `quantile`, `quantiles`, `median`, `quantileExactWeighted`, `quantilesExactWeighted`, `medianExactWeighted`. [#5304](https://github.com/ClickHouse/ClickHouse/pull/5304) ([Artem Zuikov](https://github.com/4ertus2))
+* Added `toValidUTF8` function, which replaces all invalid UTF-8 characters by replacement character � (U+FFFD). [#5322](https://github.com/ClickHouse/ClickHouse/pull/5322) ([Danila Kutenin](https://github.com/danlark1))
+* Added `format` function, which formats a constant pattern (a simplified Python format pattern) with the strings listed in the arguments. [#5330](https://github.com/ClickHouse/ClickHouse/pull/5330) ([Danila Kutenin](https://github.com/danlark1))
+* Added `system.detached_parts` table containing information about detached parts of `MergeTree` tables. [#5353](https://github.com/ClickHouse/ClickHouse/pull/5353) ([akuzm](https://github.com/akuzm))
+* Added `ngramSearch` function to calculate the non-symmetric difference between needle and haystack. [#5418](https://github.com/ClickHouse/ClickHouse/pull/5418)[#5422](https://github.com/ClickHouse/ClickHouse/pull/5422) ([Danila Kutenin](https://github.com/danlark1))
+* Implementation of basic machine learning methods (stochastic linear regression and logistic regression) using the aggregate functions interface. Has different strategies for updating model weights (simple gradient descent, momentum method, Nesterov method). Also supports mini-batches of custom size. [#4943](https://github.com/ClickHouse/ClickHouse/pull/4943) ([Quid37](https://github.com/Quid37))
+* Implementation of `geohashEncode` and `geohashDecode` functions. [#5003](https://github.com/ClickHouse/ClickHouse/pull/5003) ([Vasily Nemkov](https://github.com/Enmk))
+* Added aggregate function `timeSeriesGroupSum`, which can aggregate different time series whose sample timestamps are not aligned. It uses linear interpolation between two sample timestamps and then sums the time series together. Added aggregate function `timeSeriesGroupRateSum`, which calculates the rate of time series and then sums the rates together. [#4542](https://github.com/ClickHouse/ClickHouse/pull/4542) ([Yangkuan Liu](https://github.com/LiuYangkuan))
+* Added functions `IPv4CIDRtoIPv4Range` and `IPv6CIDRtoIPv6Range` to calculate the lower and higher bounds for an IP in the subnet using a CIDR. [#5095](https://github.com/ClickHouse/ClickHouse/pull/5095) ([Guillaume Tassery](https://github.com/YiuRULE))
+* Add an `X-ClickHouse-Summary` header when a query is sent over HTTP with the setting `send_progress_in_http_headers` enabled. It returns the usual information of `X-ClickHouse-Progress`, with additional information such as how many rows and bytes were inserted by the query. [#5116](https://github.com/ClickHouse/ClickHouse/pull/5116) ([Guillaume Tassery](https://github.com/YiuRULE))
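+
+To make a few of these additions concrete, here is a minimal sketch of the new `LIMIT ... OFFSET ... BY` syntax together with the `format` and `toValidUTF8` functions; the `pageviews` table and its columns are hypothetical:
+
+```sql
+-- Hypothetical table: one row per (domain, url) with a hit counter.
+CREATE TABLE pageviews (domain String, url String, hits UInt64)
+ENGINE = MergeTree ORDER BY (domain, url);
+
+-- Skip the top URL per domain and keep the next two (offset 1, limit 2).
+SELECT domain, url, hits
+FROM pageviews
+ORDER BY domain, hits DESC
+LIMIT 2 OFFSET 1 BY domain;
+
+-- Simplified Python-style pattern formatting and UTF-8 sanitation.
+SELECT format('{} has {} hits', domain, toString(hits)), toValidUTF8(url)
+FROM pageviews;
+```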
### Improvements
-* Added `max_parts_in_total` setting for MergeTree family of tables (default: 100 000) that prevents unsafe specification of partition key #5166. [#5171](https://github.com/yandex/ClickHouse/pull/5171) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `clickhouse-obfuscator`: derive seed for individual columns by combining initial seed with column name, not column position. This is intended to transform datasets with multiple related tables, so that tables will remain JOINable after transformation. [#5178](https://github.com/yandex/ClickHouse/pull/5178) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added functions `JSONExtractRaw`, `JSONExtractKeyAndValues`. Renamed functions `jsonExtract` to `JSONExtract`. When something goes wrong these functions return the correspondent values, not `NULL`. Modified function `JSONExtract`, now it gets the return type from its last parameter and doesn't inject nullables. Implemented fallback to RapidJSON in case AVX2 instructions are not available. Simdjson library updated to a new version. [#5235](https://github.com/yandex/ClickHouse/pull/5235) ([Vitaly Baranov](https://github.com/vitlibar))
-* Now `if` and `multiIf` functions don't rely on the condition's `Nullable`, but rely on the branches for sql compatibility. [#5238](https://github.com/yandex/ClickHouse/pull/5238) ([Jian Wu](https://github.com/janplus))
-* `In` predicate now generates `Null` result from `Null` input like the `Equal` function. [#5152](https://github.com/yandex/ClickHouse/pull/5152) ([Jian Wu](https://github.com/janplus))
-* Check the time limit every (flush_interval / poll_timeout) number of rows from Kafka. This allows to break the reading from Kafka consumer more frequently and to check the time limits for the top-level streams [#5249](https://github.com/yandex/ClickHouse/pull/5249) ([Ivan](https://github.com/abyss7))
-* Link rdkafka with bundled SASL. It should allow to use SASL SCRAM authentication [#5253](https://github.com/yandex/ClickHouse/pull/5253) ([Ivan](https://github.com/abyss7))
-* Batched version of RowRefList for ALL JOINS. [#5267](https://github.com/yandex/ClickHouse/pull/5267) ([Artem Zuikov](https://github.com/4ertus2))
-* clickhouse-server: more informative listen error messages. [#5268](https://github.com/yandex/ClickHouse/pull/5268) ([proller](https://github.com/proller))
-* Support dictionaries in clickhouse-copier for functions in `` [#5270](https://github.com/yandex/ClickHouse/pull/5270) ([proller](https://github.com/proller))
+* Added `max_parts_in_total` setting for the MergeTree family of tables (default: 100 000) that prevents unsafe specification of the partition key #5166. [#5171](https://github.com/ClickHouse/ClickHouse/pull/5171) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `clickhouse-obfuscator`: derive seed for individual columns by combining initial seed with column name, not column position. This is intended to transform datasets with multiple related tables, so that tables will remain JOINable after transformation. [#5178](https://github.com/ClickHouse/ClickHouse/pull/5178) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added functions `JSONExtractRaw`, `JSONExtractKeyAndValues`. Renamed function `jsonExtract` to `JSONExtract`. When something goes wrong these functions return the corresponding values, not `NULL`. Modified the `JSONExtract` function: now it gets the return type from its last parameter and doesn't inject nullables. Implemented fallback to RapidJSON in case AVX2 instructions are not available. Simdjson library updated to a new version. [#5235](https://github.com/ClickHouse/ClickHouse/pull/5235) ([Vitaly Baranov](https://github.com/vitlibar))
+* Now `if` and `multiIf` functions don't rely on the condition's `Nullable`, but rely on the branches for SQL compatibility. [#5238](https://github.com/ClickHouse/ClickHouse/pull/5238) ([Jian Wu](https://github.com/janplus))
+* `In` predicate now generates `Null` result from `Null` input like the `Equal` function. [#5152](https://github.com/ClickHouse/ClickHouse/pull/5152) ([Jian Wu](https://github.com/janplus))
+* Check the time limit every (flush_interval / poll_timeout) rows read from Kafka. This makes it possible to break the reading from the Kafka consumer more frequently and to check the time limits for the top-level streams [#5249](https://github.com/ClickHouse/ClickHouse/pull/5249) ([Ivan](https://github.com/abyss7))
+* Link rdkafka with the bundled SASL. It should make it possible to use SASL SCRAM authentication [#5253](https://github.com/ClickHouse/ClickHouse/pull/5253) ([Ivan](https://github.com/abyss7))
+* Batched version of RowRefList for ALL JOINS. [#5267](https://github.com/ClickHouse/ClickHouse/pull/5267) ([Artem Zuikov](https://github.com/4ertus2))
+* clickhouse-server: more informative listen error messages. [#5268](https://github.com/ClickHouse/ClickHouse/pull/5268) ([proller](https://github.com/proller))
+* Support dictionaries in clickhouse-copier for functions in `` [#5270](https://github.com/ClickHouse/ClickHouse/pull/5270) ([proller](https://github.com/proller))
* Add new setting `kafka_commit_every_batch` to regulate the Kafka committing policy.
+It allows setting the commit mode: after every batch of messages is handled, or after the whole block is written to the storage. It's a trade-off between losing some messages or reading them twice in some extreme situations (see the sketch after this list). [#5308](https://github.com/ClickHouse/ClickHouse/pull/5308) ([Ivan](https://github.com/abyss7))
-* Make `windowFunnel` support other Unsigned Integer Types. [#5320](https://github.com/yandex/ClickHouse/pull/5320) ([sundyli](https://github.com/sundy-li))
-* Allow to shadow virtual column `_table` in Merge engine. [#5325](https://github.com/yandex/ClickHouse/pull/5325) ([Ivan](https://github.com/abyss7))
-* Make `sequenceMatch` aggregate functions support other unsigned Integer types [#5339](https://github.com/yandex/ClickHouse/pull/5339) ([sundyli](https://github.com/sundy-li))
-* Better error messages if checksum mismatch is most likely caused by hardware failures. [#5355](https://github.com/yandex/ClickHouse/pull/5355) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Check that underlying tables support sampling for `StorageMerge` [#5366](https://github.com/yandex/ClickHouse/pull/5366) ([Ivan](https://github.com/abyss7))
-* Сlose MySQL connections after their usage in external dictionaries. It is related to issue #893. [#5395](https://github.com/yandex/ClickHouse/pull/5395) ([Clément Rodriguez](https://github.com/clemrodriguez))
-* Improvements of MySQL Wire Protocol. Changed name of format to MySQLWire. Using RAII for calling RSA_free. Disabling SSL if context cannot be created. [#5419](https://github.com/yandex/ClickHouse/pull/5419) ([Yuriy Baranov](https://github.com/yurriy))
-* clickhouse-client: allow to run with unaccessable history file (read-only, no disk space, file is directory, ...). [#5431](https://github.com/yandex/ClickHouse/pull/5431) ([proller](https://github.com/proller))
-* Respect query settings in asynchronous INSERTs into Distributed tables. [#4936](https://github.com/yandex/ClickHouse/pull/4936) ([TCeason](https://github.com/TCeason))
-* Renamed functions `leastSqr` to `simpleLinearRegression`, `LinearRegression` to `linearRegression`, `LogisticRegression` to `logisticRegression`. [#5391](https://github.com/yandex/ClickHouse/pull/5391) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+It allows to set commit mode: after every batch of messages is handled, or after the whole block is written to the storage. It's a trade-off between losing some messages or reading them twice in some extreme situations. [#5308](https://github.com/ClickHouse/ClickHouse/pull/5308) ([Ivan](https://github.com/abyss7))
+* Make `windowFunnel` support other unsigned integer types. [#5320](https://github.com/ClickHouse/ClickHouse/pull/5320) ([sundyli](https://github.com/sundy-li))
+* Allow shadowing the virtual column `_table` in the Merge engine. [#5325](https://github.com/ClickHouse/ClickHouse/pull/5325) ([Ivan](https://github.com/abyss7))
+* Make `sequenceMatch` aggregate functions support other unsigned integer types [#5339](https://github.com/ClickHouse/ClickHouse/pull/5339) ([sundyli](https://github.com/sundy-li))
+* Better error messages if checksum mismatch is most likely caused by hardware failures. [#5355](https://github.com/ClickHouse/ClickHouse/pull/5355) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Check that underlying tables support sampling for `StorageMerge` [#5366](https://github.com/ClickHouse/ClickHouse/pull/5366) ([Ivan](https://github.com/abyss7))
+* Close MySQL connections after their usage in external dictionaries. It is related to issue #893. [#5395](https://github.com/ClickHouse/ClickHouse/pull/5395) ([Clément Rodriguez](https://github.com/clemrodriguez))
+* Improvements of the MySQL wire protocol: changed the name of the format to MySQLWire, used RAII for calling `RSA_free`, disabled SSL if the context cannot be created. [#5419](https://github.com/ClickHouse/ClickHouse/pull/5419) ([Yuriy Baranov](https://github.com/yurriy))
+* clickhouse-client: allow running with an inaccessible history file (read-only, no disk space, file is a directory, ...). [#5431](https://github.com/ClickHouse/ClickHouse/pull/5431) ([proller](https://github.com/proller))
+* Respect query settings in asynchronous INSERTs into Distributed tables. [#4936](https://github.com/ClickHouse/ClickHouse/pull/4936) ([TCeason](https://github.com/TCeason))
+* Renamed functions `leastSqr` to `simpleLinearRegression`, `LinearRegression` to `linearRegression`, `LogisticRegression` to `logisticRegression`. [#5391](https://github.com/ClickHouse/ClickHouse/pull/5391) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
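+
+As a hedged illustration of the new `kafka_commit_every_batch` setting mentioned above, a sketch of a `Kafka` engine table; the broker, topic, and consumer group names are placeholders:
+
+```sql
+CREATE TABLE queue (ts DateTime, message String)
+ENGINE = Kafka
+SETTINGS kafka_broker_list = 'localhost:9092',
+         kafka_topic_list = 'events',
+         kafka_group_name = 'clickhouse-consumer',
+         kafka_format = 'JSONEachRow',
+         kafka_commit_every_batch = 1; -- commit after each handled batch rather than after the whole block
+```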
### Performance Improvements
-* Parallelize processing of parts of non-replicated MergeTree tables in ALTER MODIFY query. [#4639](https://github.com/yandex/ClickHouse/pull/4639) ([Ivan Kush](https://github.com/IvanKush))
-* Optimizations in regular expressions extraction. [#5193](https://github.com/yandex/ClickHouse/pull/5193) [#5191](https://github.com/yandex/ClickHouse/pull/5191) ([Danila Kutenin](https://github.com/danlark1))
-* Do not add right join key column to join result if it's used only in join on section. [#5260](https://github.com/yandex/ClickHouse/pull/5260) ([Artem Zuikov](https://github.com/4ertus2))
-* Freeze the Kafka buffer after first empty response. It avoids multiple invokations of `ReadBuffer::next()` for empty result in some row-parsing streams. [#5283](https://github.com/yandex/ClickHouse/pull/5283) ([Ivan](https://github.com/abyss7))
-* `concat` function optimization for multiple arguments. [#5357](https://github.com/yandex/ClickHouse/pull/5357) ([Danila Kutenin](https://github.com/danlark1))
-* Query optimisation. Allow push down IN statement while rewriting commа/cross join into inner one. [#5396](https://github.com/yandex/ClickHouse/pull/5396) ([Artem Zuikov](https://github.com/4ertus2))
-* Upgrade our LZ4 implementation with reference one to have faster decompression. [#5070](https://github.com/yandex/ClickHouse/pull/5070) ([Danila Kutenin](https://github.com/danlark1))
-* Implemented MSD radix sort (based on kxsort), and partial sorting. [#5129](https://github.com/yandex/ClickHouse/pull/5129) ([Evgenii Pravda](https://github.com/kvinty))
+* Parallelize processing of parts of non-replicated MergeTree tables in ALTER MODIFY query. [#4639](https://github.com/ClickHouse/ClickHouse/pull/4639) ([Ivan Kush](https://github.com/IvanKush))
+* Optimizations in regular expressions extraction. [#5193](https://github.com/ClickHouse/ClickHouse/pull/5193) [#5191](https://github.com/ClickHouse/ClickHouse/pull/5191) ([Danila Kutenin](https://github.com/danlark1))
+* Do not add the right join key column to the join result if it's used only in the JOIN ON section. [#5260](https://github.com/ClickHouse/ClickHouse/pull/5260) ([Artem Zuikov](https://github.com/4ertus2))
+* Freeze the Kafka buffer after the first empty response. It avoids multiple invocations of `ReadBuffer::next()` for an empty result in some row-parsing streams. [#5283](https://github.com/ClickHouse/ClickHouse/pull/5283) ([Ivan](https://github.com/abyss7))
+* `concat` function optimization for multiple arguments. [#5357](https://github.com/ClickHouse/ClickHouse/pull/5357) ([Danila Kutenin](https://github.com/danlark1))
+* Query optimization: allow pushing down IN statements while rewriting comma/cross joins into inner ones. [#5396](https://github.com/ClickHouse/ClickHouse/pull/5396) ([Artem Zuikov](https://github.com/4ertus2))
+* Replaced our LZ4 implementation with the reference one to get faster decompression. [#5070](https://github.com/ClickHouse/ClickHouse/pull/5070) ([Danila Kutenin](https://github.com/danlark1))
+* Implemented MSD radix sort (based on kxsort), and partial sorting. [#5129](https://github.com/ClickHouse/ClickHouse/pull/5129) ([Evgenii Pravda](https://github.com/kvinty))
### Bug Fixes
-* Fix push require columns with join [#5192](https://github.com/yandex/ClickHouse/pull/5192) ([Winter Zhang](https://github.com/zhang2014))
-* Fixed bug, when ClickHouse is run by systemd, the command `sudo service clickhouse-server forcerestart` was not working as expected. [#5204](https://github.com/yandex/ClickHouse/pull/5204) ([proller](https://github.com/proller))
-* Fix http error codes in DataPartsExchange (interserver http server on 9009 port always returned code 200, even on errors). [#5216](https://github.com/yandex/ClickHouse/pull/5216) ([proller](https://github.com/proller))
-* Fix SimpleAggregateFunction for String longer than MAX_SMALL_STRING_SIZE [#5311](https://github.com/yandex/ClickHouse/pull/5311) ([Azat Khuzhin](https://github.com/azat))
-* Fix error for `Decimal` to `Nullable(Decimal)` conversion in IN. Support other Decimal to Decimal conversions (including different scales). [#5350](https://github.com/yandex/ClickHouse/pull/5350) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed FPU clobbering in simdjson library that lead to wrong calculation of `uniqHLL` and `uniqCombined` aggregate function and math functions such as `log`. [#5354](https://github.com/yandex/ClickHouse/pull/5354) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed handling mixed const/nonconst cases in JSON functions. [#5435](https://github.com/yandex/ClickHouse/pull/5435) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix `retention` function. Now all conditions that satisfy in a row of data are added to the data state. [#5119](https://github.com/yandex/ClickHouse/pull/5119) ([小路](https://github.com/nicelulu))
-* Fix result type for `quantileExact` with Decimals. [#5304](https://github.com/yandex/ClickHouse/pull/5304) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix push-down of required columns with JOIN [#5192](https://github.com/ClickHouse/ClickHouse/pull/5192) ([Winter Zhang](https://github.com/zhang2014))
+* Fixed a bug where, when ClickHouse is run by systemd, the command `sudo service clickhouse-server forcerestart` was not working as expected. [#5204](https://github.com/ClickHouse/ClickHouse/pull/5204) ([proller](https://github.com/proller))
+* Fix http error codes in DataPartsExchange (the interserver http server on port 9009 always returned code 200, even on errors). [#5216](https://github.com/ClickHouse/ClickHouse/pull/5216) ([proller](https://github.com/proller))
+* Fix SimpleAggregateFunction for String longer than MAX_SMALL_STRING_SIZE [#5311](https://github.com/ClickHouse/ClickHouse/pull/5311) ([Azat Khuzhin](https://github.com/azat))
+* Fix error for `Decimal` to `Nullable(Decimal)` conversion in IN. Support other Decimal to Decimal conversions (including different scales). [#5350](https://github.com/ClickHouse/ClickHouse/pull/5350) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed FPU clobbering in the simdjson library that led to wrong calculation of the `uniqHLL` and `uniqCombined` aggregate functions and of math functions such as `log`. [#5354](https://github.com/ClickHouse/ClickHouse/pull/5354) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed handling mixed const/nonconst cases in JSON functions. [#5435](https://github.com/ClickHouse/ClickHouse/pull/5435) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix `retention` function. Now all conditions that are satisfied in a row of data are added to the data state. [#5119](https://github.com/ClickHouse/ClickHouse/pull/5119) ([小路](https://github.com/nicelulu))
+* Fix result type for `quantileExact` with Decimals. [#5304](https://github.com/ClickHouse/ClickHouse/pull/5304) ([Artem Zuikov](https://github.com/4ertus2))
### Documentation
-* Translate documentation for `CollapsingMergeTree` to chinese. [#5168](https://github.com/yandex/ClickHouse/pull/5168) ([张风啸](https://github.com/AlexZFX))
+* Translate documentation for `CollapsingMergeTree` to Chinese. [#5168](https://github.com/ClickHouse/ClickHouse/pull/5168) ([张风啸](https://github.com/AlexZFX))
* Translate some documentation about table engines to Chinese.
- [#5134](https://github.com/yandex/ClickHouse/pull/5134)
- [#5328](https://github.com/yandex/ClickHouse/pull/5328)
+ [#5134](https://github.com/ClickHouse/ClickHouse/pull/5134)
+ [#5328](https://github.com/ClickHouse/ClickHouse/pull/5328)
([never lee](https://github.com/neverlee))
### Build/Testing/Packaging Improvements
-* Fix some sanitizer reports that show probable use-after-free.[#5139](https://github.com/yandex/ClickHouse/pull/5139) [#5143](https://github.com/yandex/ClickHouse/pull/5143) [#5393](https://github.com/yandex/ClickHouse/pull/5393) ([Ivan](https://github.com/abyss7))
-* Move performance tests out of separate directories for convenience. [#5158](https://github.com/yandex/ClickHouse/pull/5158) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix incorrect performance tests. [#5255](https://github.com/yandex/ClickHouse/pull/5255) ([alesapin](https://github.com/alesapin))
-* Added a tool to calculate checksums caused by bit flips to debug hardware issues. [#5334](https://github.com/yandex/ClickHouse/pull/5334) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Make runner script more usable. [#5340](https://github.com/yandex/ClickHouse/pull/5340)[#5360](https://github.com/yandex/ClickHouse/pull/5360) ([filimonov](https://github.com/filimonov))
-* Add small instruction how to write performance tests. [#5408](https://github.com/yandex/ClickHouse/pull/5408) ([alesapin](https://github.com/alesapin))
-* Add ability to make substitutions in create, fill and drop query in performance tests [#5367](https://github.com/yandex/ClickHouse/pull/5367) ([Olga Khvostikova](https://github.com/stavrolia))
+* Fix some sanitizer reports that show probable use-after-free. [#5139](https://github.com/ClickHouse/ClickHouse/pull/5139) [#5143](https://github.com/ClickHouse/ClickHouse/pull/5143) [#5393](https://github.com/ClickHouse/ClickHouse/pull/5393) ([Ivan](https://github.com/abyss7))
+* Move performance tests out of separate directories for convenience. [#5158](https://github.com/ClickHouse/ClickHouse/pull/5158) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix incorrect performance tests. [#5255](https://github.com/ClickHouse/ClickHouse/pull/5255) ([alesapin](https://github.com/alesapin))
+* Added a tool to calculate checksums caused by bit flips to debug hardware issues. [#5334](https://github.com/ClickHouse/ClickHouse/pull/5334) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Make the runner script more usable. [#5340](https://github.com/ClickHouse/ClickHouse/pull/5340) [#5360](https://github.com/ClickHouse/ClickHouse/pull/5360) ([filimonov](https://github.com/filimonov))
+* Add brief instructions on how to write performance tests. [#5408](https://github.com/ClickHouse/ClickHouse/pull/5408) ([alesapin](https://github.com/alesapin))
+* Add ability to make substitutions in create, fill and drop queries in performance tests [#5367](https://github.com/ClickHouse/ClickHouse/pull/5367) ([Olga Khvostikova](https://github.com/stavrolia))
## ClickHouse release 19.7.5.27, 2019-06-09
### New features
-* Added bitmap related functions `bitmapHasAny` and `bitmapHasAll` analogous to `hasAny` and `hasAll` functions for arrays. [#5279](https://github.com/yandex/ClickHouse/pull/5279) ([Sergi Vladykin](https://github.com/svladykin))
+* Added bitmap related functions `bitmapHasAny` and `bitmapHasAll`, analogous to the `hasAny` and `hasAll` functions for arrays (see the sketch below). [#5279](https://github.com/ClickHouse/ClickHouse/pull/5279) ([Sergi Vladykin](https://github.com/svladykin))
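+
+A minimal sketch of the new bitmap functions; the expected results follow from the `hasAny`/`hasAll` analogy:
+
+```sql
+SELECT
+    bitmapHasAny(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5])) AS has_any, -- 1: both contain 3
+    bitmapHasAll(bitmapBuild([1, 2, 3]), bitmapBuild([3, 4, 5])) AS has_all; -- 0: 4 and 5 are missing from the first bitmap
+```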
### Bug Fixes
-* Fix segfault on `minmax` INDEX with Null value. [#5246](https://github.com/yandex/ClickHouse/pull/5246) ([Nikita Vasilev](https://github.com/nikvas0))
-* Mark all input columns in LIMIT BY as required output. It fixes 'Not found column' error in some distributed queries. [#5407](https://github.com/yandex/ClickHouse/pull/5407) ([Constantin S. Pan](https://github.com/kvap))
-* Fix "Column '0' already exists" error in `SELECT .. PREWHERE` on column with DEFAULT [#5397](https://github.com/yandex/ClickHouse/pull/5397) ([proller](https://github.com/proller))
-* Fix `ALTER MODIFY TTL` query on `ReplicatedMergeTree`. [#5539](https://github.com/yandex/ClickHouse/pull/5539/commits) ([Anton Popov](https://github.com/CurtizJ))
-* Don't crash the server when Kafka consumers have failed to start. [#5285](https://github.com/yandex/ClickHouse/pull/5285) ([Ivan](https://github.com/abyss7))
-* Fixed bitmap functions produce wrong result. [#5359](https://github.com/yandex/ClickHouse/pull/5359) ([Andy Yang](https://github.com/andyyzh))
-* Fix element_count for hashed dictionary (do not include duplicates) [#5440](https://github.com/yandex/ClickHouse/pull/5440) ([Azat Khuzhin](https://github.com/azat))
-* Use contents of environment variable TZ as the name for timezone. It helps to correctly detect default timezone in some cases.[#5443](https://github.com/yandex/ClickHouse/pull/5443) ([Ivan](https://github.com/abyss7))
-* Do not try to convert integers in `dictGetT` functions, because it doesn't work correctly. Throw an exception instead. [#5446](https://github.com/yandex/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix settings in ExternalData HTTP request. [#5455](https://github.com/yandex/ClickHouse/pull/5455) ([Danila
+* Fix segfault on `minmax` INDEX with Null value. [#5246](https://github.com/ClickHouse/ClickHouse/pull/5246) ([Nikita Vasilev](https://github.com/nikvas0))
+* Mark all input columns in LIMIT BY as required output. It fixes 'Not found column' error in some distributed queries. [#5407](https://github.com/ClickHouse/ClickHouse/pull/5407) ([Constantin S. Pan](https://github.com/kvap))
+* Fix "Column '0' already exists" error in `SELECT .. PREWHERE` on column with DEFAULT [#5397](https://github.com/ClickHouse/ClickHouse/pull/5397) ([proller](https://github.com/proller))
+* Fix `ALTER MODIFY TTL` query on `ReplicatedMergeTree`. [#5539](https://github.com/ClickHouse/ClickHouse/pull/5539/commits) ([Anton Popov](https://github.com/CurtizJ))
+* Don't crash the server when Kafka consumers have failed to start. [#5285](https://github.com/ClickHouse/ClickHouse/pull/5285) ([Ivan](https://github.com/abyss7))
+* Fixed bitmap functions producing wrong results. [#5359](https://github.com/ClickHouse/ClickHouse/pull/5359) ([Andy Yang](https://github.com/andyyzh))
+* Fix element_count for hashed dictionary (do not include duplicates) [#5440](https://github.com/ClickHouse/ClickHouse/pull/5440) ([Azat Khuzhin](https://github.com/azat))
+* Use contents of environment variable TZ as the name for timezone. It helps to correctly detect the default timezone in some cases. [#5443](https://github.com/ClickHouse/ClickHouse/pull/5443) ([Ivan](https://github.com/abyss7))
+* Do not try to convert integers in `dictGetT` functions, because it doesn't work correctly. Throw an exception instead. [#5446](https://github.com/ClickHouse/ClickHouse/pull/5446) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix settings in ExternalData HTTP request. [#5455](https://github.com/ClickHouse/ClickHouse/pull/5455) ([Danila
Kutenin](https://github.com/danlark1))
-* Fix bug when parts were removed only from FS without dropping them from Zookeeper. [#5520](https://github.com/yandex/ClickHouse/pull/5520) ([alesapin](https://github.com/alesapin))
-* Fix segmentation fault in `bitmapHasAny` function. [#5528](https://github.com/yandex/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang))
-* Fixed error when replication connection pool doesn't retry to resolve host, even when DNS cache was dropped. [#5534](https://github.com/yandex/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin))
-* Fixed `DROP INDEX IF EXISTS` query. Now `ALTER TABLE ... DROP INDEX IF EXISTS ...` query doesn't raise an exception if provided index does not exist. [#5524](https://github.com/yandex/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn))
-* Fix union all supertype column. There were cases with inconsistent data and column types of resulting columns. [#5503](https://github.com/yandex/ClickHouse/pull/5503) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix bug when parts were removed only from the filesystem without dropping them from ZooKeeper. [#5520](https://github.com/ClickHouse/ClickHouse/pull/5520) ([alesapin](https://github.com/alesapin))
+* Fix segmentation fault in `bitmapHasAny` function. [#5528](https://github.com/ClickHouse/ClickHouse/pull/5528) ([Zhichang Yu](https://github.com/yuzhichang))
+* Fixed error when the replication connection pool doesn't retry resolving the host, even when the DNS cache was dropped. [#5534](https://github.com/ClickHouse/ClickHouse/pull/5534) ([alesapin](https://github.com/alesapin))
+* Fixed `DROP INDEX IF EXISTS` query. Now the `ALTER TABLE ... DROP INDEX IF EXISTS ...` query doesn't raise an exception if the provided index does not exist (see the sketch after this list). [#5524](https://github.com/ClickHouse/ClickHouse/pull/5524) ([Gleb Novikov](https://github.com/NanoBjorn))
+* Fix `UNION ALL` supertype column. There were cases with inconsistent data and column types of resulting columns. [#5503](https://github.com/ClickHouse/ClickHouse/pull/5503) ([Artem Zuikov](https://github.com/4ertus2))
* Skip ZNONODE during DDL query processing. Previously, if another node removed the znode in the task queue, the one that
+had not processed it but had already gotten the list of children would terminate the DDLWorker thread. [#5489](https://github.com/ClickHouse/ClickHouse/pull/5489) ([Azat Khuzhin](https://github.com/azat))
-* Fix INSERT into Distributed() table with MATERIALIZED column. [#5429](https://github.com/yandex/ClickHouse/pull/5429) ([Azat Khuzhin](https://github.com/azat))
+did not process it, but already get list of children, will terminate the DDLWorker thread. [#5489](https://github.com/ClickHouse/ClickHouse/pull/5489) ([Azat Khuzhin](https://github.com/azat))
+* Fix INSERT into Distributed() table with MATERIALIZED column. [#5429](https://github.com/ClickHouse/ClickHouse/pull/5429) ([Azat Khuzhin](https://github.com/azat))
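+
+A one-line sketch of the now tolerant `DROP INDEX IF EXISTS` form mentioned above; the table and index names are hypothetical:
+
+```sql
+-- No exception is raised even if idx_value was never created.
+ALTER TABLE t DROP INDEX IF EXISTS idx_value;
+```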
## ClickHouse release 19.7.3.9, 2019-05-30
### New Features
* Allow limiting the range of a setting that can be specified by the user.
These constraints can be set up in user settings profile.
-[#4931](https://github.com/yandex/ClickHouse/pull/4931) ([Vitaly
+[#4931](https://github.com/ClickHouse/ClickHouse/pull/4931) ([Vitaly
Baranov](https://github.com/vitlibar))
* Add a second version of the function `groupUniqArray` with an optional
`max_size` parameter that limits the size of the resulting array. This
behavior is similar to the `groupArray(max_size)(x)` function (see the sketch after this list).
-[#5026](https://github.com/yandex/ClickHouse/pull/5026) ([Guillaume
+[#5026](https://github.com/ClickHouse/ClickHouse/pull/5026) ([Guillaume
Tassery](https://github.com/YiuRULE))
* For TSVWithNames/CSVWithNames input file formats, column order can now be
determined from the file header. This is controlled by the
`input_format_with_names_use_header` parameter.
-[#5081](https://github.com/yandex/ClickHouse/pull/5081)
+[#5081](https://github.com/ClickHouse/ClickHouse/pull/5081)
([Alexander](https://github.com/Akazz))
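+
+A hedged sketch of two of these features, the capped `groupUniqArray` and the header-driven column order; `numbers(100)` is the built-in table function:
+
+```sql
+-- Collect at most 3 distinct values of number % 10 across 100 rows.
+SELECT groupUniqArray(3)(number % 10) FROM numbers(100);
+
+-- Let the header of CSVWithNames/TSVWithNames input determine column order.
+SET input_format_with_names_use_header = 1;
+```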
### Bug Fixes
* Crash with uncompressed_cache + JOIN during merge (#5197)
-[#5133](https://github.com/yandex/ClickHouse/pull/5133) ([Danila
+[#5133](https://github.com/ClickHouse/ClickHouse/pull/5133) ([Danila
Kutenin](https://github.com/danlark1))
* Segmentation fault on a clickhouse-client query to system tables. #5066
-[#5127](https://github.com/yandex/ClickHouse/pull/5127)
+[#5127](https://github.com/ClickHouse/ClickHouse/pull/5127)
([Ivan](https://github.com/abyss7))
* Data loss on heavy load via KafkaEngine (#4736)
-[#5080](https://github.com/yandex/ClickHouse/pull/5080)
+[#5080](https://github.com/ClickHouse/ClickHouse/pull/5080)
([Ivan](https://github.com/abyss7))
-* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/ClickHouse/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Performance Improvements
* Use radix sort for sorting by a single numeric column in `ORDER BY` without
- `LIMIT`. [#5106](https://github.com/yandex/ClickHouse/pull/5106),
-[#4439](https://github.com/yandex/ClickHouse/pull/4439)
+ `LIMIT`. [#5106](https://github.com/ClickHouse/ClickHouse/pull/5106),
+[#4439](https://github.com/ClickHouse/ClickHouse/pull/4439)
([Evgenii Pravda](https://github.com/kvinty),
[alexey-milovidov](https://github.com/alexey-milovidov))
### Documentation
* Translate documentation for some table engines to Chinese.
- [#5107](https://github.com/yandex/ClickHouse/pull/5107),
-[#5094](https://github.com/yandex/ClickHouse/pull/5094),
-[#5087](https://github.com/yandex/ClickHouse/pull/5087)
+ [#5107](https://github.com/ClickHouse/ClickHouse/pull/5107),
+[#5094](https://github.com/ClickHouse/ClickHouse/pull/5094),
+[#5087](https://github.com/ClickHouse/ClickHouse/pull/5087)
([张风啸](https://github.com/AlexZFX)),
-[#5068](https://github.com/yandex/ClickHouse/pull/5068) ([never
+[#5068](https://github.com/ClickHouse/ClickHouse/pull/5068) ([never
lee](https://github.com/neverlee))
### Build/Testing/Packaging Improvements
* Print UTF-8 characters properly in `clickhouse-test`.
- [#5084](https://github.com/yandex/ClickHouse/pull/5084)
+ [#5084](https://github.com/ClickHouse/ClickHouse/pull/5084)
([alexey-milovidov](https://github.com/alexey-milovidov))
* Add a command line parameter for clickhouse-client to always load suggestion
- data. [#5102](https://github.com/yandex/ClickHouse/pull/5102)
+ data. [#5102](https://github.com/ClickHouse/ClickHouse/pull/5102)
([alexey-milovidov](https://github.com/alexey-milovidov))
* Resolve some of PVS-Studio warnings.
- [#5082](https://github.com/yandex/ClickHouse/pull/5082)
+ [#5082](https://github.com/ClickHouse/ClickHouse/pull/5082)
([alexey-milovidov](https://github.com/alexey-milovidov))
-* Update LZ4 [#5040](https://github.com/yandex/ClickHouse/pull/5040) ([Danila
+* Update LZ4 [#5040](https://github.com/ClickHouse/ClickHouse/pull/5040) ([Danila
Kutenin](https://github.com/danlark1))
* Add gperf to build requirements for upcoming pull request #5030.
- [#5110](https://github.com/yandex/ClickHouse/pull/5110)
+ [#5110](https://github.com/ClickHouse/ClickHouse/pull/5110)
([proller](https://github.com/proller))
## ClickHouse release 19.6.3.18, 2019-06-13
### Bug Fixes
-* Fixed IN condition pushdown for queries from table functions `mysql` and `odbc` and corresponding table engines. This fixes #3540 and #2384. [#5313](https://github.com/yandex/ClickHouse/pull/5313) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix deadlock in Zookeeper. [#5297](https://github.com/yandex/ClickHouse/pull/5297) ([github1youlc](https://github.com/github1youlc))
-* Allow quoted decimals in CSV. [#5284](https://github.com/yandex/ClickHouse/pull/5284) ([Artem Zuikov](https://github.com/4ertus2)
-* Disallow conversion from float Inf/NaN into Decimals (throw exception). [#5282](https://github.com/yandex/ClickHouse/pull/5282) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix data race in rename query. [#5247](https://github.com/yandex/ClickHouse/pull/5247) ([Winter Zhang](https://github.com/zhang2014))
-* Temporarily disable LFAlloc. Usage of LFAlloc might lead to a lot of MAP_FAILED in allocating UncompressedCache and in a result to crashes of queries at high loaded servers. [cfdba93](https://github.com/yandex/ClickHouse/commit/cfdba938ce22f16efeec504f7f90206a515b1280)([Danila Kutenin](https://github.com/danlark1))
+* Fixed IN condition pushdown for queries from table functions `mysql` and `odbc` and corresponding table engines. This fixes #3540 and #2384. [#5313](https://github.com/ClickHouse/ClickHouse/pull/5313) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix deadlock in Zookeeper. [#5297](https://github.com/ClickHouse/ClickHouse/pull/5297) ([github1youlc](https://github.com/github1youlc))
+* Allow quoted decimals in CSV. [#5284](https://github.com/ClickHouse/ClickHouse/pull/5284) ([Artem Zuikov](https://github.com/4ertus2))
+* Disallow conversion from float Inf/NaN into Decimals (throw exception). [#5282](https://github.com/ClickHouse/ClickHouse/pull/5282) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix data race in rename query. [#5247](https://github.com/ClickHouse/ClickHouse/pull/5247) ([Winter Zhang](https://github.com/zhang2014))
+* Temporarily disable LFAlloc. Usage of LFAlloc might lead to a lot of MAP_FAILED errors when allocating UncompressedCache and, as a result, to crashes of queries on highly loaded servers. [cfdba93](https://github.com/ClickHouse/ClickHouse/commit/cfdba938ce22f16efeec504f7f90206a515b1280) ([Danila Kutenin](https://github.com/danlark1))
## ClickHouse release 19.6.2.11, 2019-05-13
### New Features
-* TTL expressions for columns and tables. [#4212](https://github.com/yandex/ClickHouse/pull/4212) ([Anton Popov](https://github.com/CurtizJ))
-* Added support for `brotli` compression for HTTP responses (Accept-Encoding: br) [#4388](https://github.com/yandex/ClickHouse/pull/4388) ([Mikhail](https://github.com/fandyushin))
-* Added new function `isValidUTF8` for checking whether a set of bytes is correctly utf-8 encoded. [#4934](https://github.com/yandex/ClickHouse/pull/4934) ([Danila Kutenin](https://github.com/danlark1))
-* Add new load balancing policy `first_or_random` which sends queries to the first specified host and if it's inaccessible send queries to random hosts of shard. Useful for cross-replication topology setups. [#5012](https://github.com/yandex/ClickHouse/pull/5012) ([nvartolomei](https://github.com/nvartolomei))
+* TTL expressions for columns and tables (see the sketch after this list). [#4212](https://github.com/ClickHouse/ClickHouse/pull/4212) ([Anton Popov](https://github.com/CurtizJ))
+* Added support for `brotli` compression for HTTP responses (Accept-Encoding: br) [#4388](https://github.com/ClickHouse/ClickHouse/pull/4388) ([Mikhail](https://github.com/fandyushin))
+* Added new function `isValidUTF8` for checking whether a set of bytes is correctly UTF-8 encoded. [#4934](https://github.com/ClickHouse/ClickHouse/pull/4934) ([Danila Kutenin](https://github.com/danlark1))
+* Add new load balancing policy `first_or_random`, which sends queries to the first specified host and, if it's inaccessible, sends queries to random hosts of the shard. Useful for cross-replication topology setups. [#5012](https://github.com/ClickHouse/ClickHouse/pull/5012) ([nvartolomei](https://github.com/nvartolomei))
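+
+A minimal sketch of the column- and table-level TTL syntax; the `events` table is hypothetical:
+
+```sql
+CREATE TABLE events
+(
+    d Date,
+    payload String TTL d + INTERVAL 1 MONTH -- the column expires a month after d
+)
+ENGINE = MergeTree
+ORDER BY d
+TTL d + INTERVAL 1 YEAR; -- whole rows expire a year after d
+```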
### Experimental Features
-* Add setting `index_granularity_bytes` (adaptive index granularity) for MergeTree* tables family. [#4826](https://github.com/yandex/ClickHouse/pull/4826) ([alesapin](https://github.com/alesapin))
+* Add setting `index_granularity_bytes` (adaptive index granularity) for the MergeTree* family of tables (a sketch follows). [#4826](https://github.com/ClickHouse/ClickHouse/pull/4826) ([alesapin](https://github.com/alesapin))
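+
+A sketch of enabling adaptive granularity on a new table; the table name and threshold are illustrative:
+
+```sql
+CREATE TABLE t_adaptive (k UInt64, v String)
+ENGINE = MergeTree
+ORDER BY k
+SETTINGS index_granularity_bytes = 10485760; -- limit a granule to ~10 MiB of data
+```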
### Improvements
-* Added support for non-constant and negative size and length arguments for function `substringUTF8`. [#4989](https://github.com/yandex/ClickHouse/pull/4989) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Disable push-down to right table in left join, left table in right join, and both tables in full join. This fixes wrong JOIN results in some cases. [#4846](https://github.com/yandex/ClickHouse/pull/4846) ([Ivan](https://github.com/abyss7))
-* `clickhouse-copier`: auto upload task configuration from `--task-file` option [#4876](https://github.com/yandex/ClickHouse/pull/4876) ([proller](https://github.com/proller))
-* Added typos handler for storage factory and table functions factory. [#4891](https://github.com/yandex/ClickHouse/pull/4891) ([Danila Kutenin](https://github.com/danlark1))
-* Support asterisks and qualified asterisks for multiple joins without subqueries [#4898](https://github.com/yandex/ClickHouse/pull/4898) ([Artem Zuikov](https://github.com/4ertus2))
-* Make missing column error message more user friendly. [#4915](https://github.com/yandex/ClickHouse/pull/4915) ([Artem Zuikov](https://github.com/4ertus2))
+* Added support for non-constant and negative size and length arguments for function `substringUTF8` (see the sketch after this list). [#4989](https://github.com/ClickHouse/ClickHouse/pull/4989) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Disable push-down to right table in left join, left table in right join, and both tables in full join. This fixes wrong JOIN results in some cases. [#4846](https://github.com/ClickHouse/ClickHouse/pull/4846) ([Ivan](https://github.com/abyss7))
+* `clickhouse-copier`: auto upload task configuration from `--task-file` option [#4876](https://github.com/ClickHouse/ClickHouse/pull/4876) ([proller](https://github.com/proller))
+* Added a typo handler for the storage factory and the table functions factory. [#4891](https://github.com/ClickHouse/ClickHouse/pull/4891) ([Danila Kutenin](https://github.com/danlark1))
+* Support asterisks and qualified asterisks for multiple joins without subqueries [#4898](https://github.com/ClickHouse/ClickHouse/pull/4898) ([Artem Zuikov](https://github.com/4ertus2))
+* Make the missing column error message more user-friendly. [#4915](https://github.com/ClickHouse/ClickHouse/pull/4915) ([Artem Zuikov](https://github.com/4ertus2))
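+
+As an example of the `substringUTF8` change, a hedged sketch assuming negative offsets count from the end of the string, as in `substring`:
+
+```sql
+-- 'привет' has 6 code points; offset -3 starts at 'в', length 2 gives 'ве'.
+SELECT substringUTF8('привет', -3, 2);
+```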
### Performance Improvements
-* Significant speedup of ASOF JOIN [#4924](https://github.com/yandex/ClickHouse/pull/4924) ([Martijn Bakker](https://github.com/Gladdy))
+* Significant speedup of ASOF JOIN [#4924](https://github.com/ClickHouse/ClickHouse/pull/4924) ([Martijn Bakker](https://github.com/Gladdy))
### Backward Incompatible Changes
-* HTTP header `Query-Id` was renamed to `X-ClickHouse-Query-Id` for consistency. [#4972](https://github.com/yandex/ClickHouse/pull/4972) ([Mikhail](https://github.com/fandyushin))
+* HTTP header `Query-Id` was renamed to `X-ClickHouse-Query-Id` for consistency. [#4972](https://github.com/ClickHouse/ClickHouse/pull/4972) ([Mikhail](https://github.com/fandyushin))
### Bug Fixes
-* Fixed potential null pointer dereference in `clickhouse-copier`. [#4900](https://github.com/yandex/ClickHouse/pull/4900) ([proller](https://github.com/proller))
-* Fixed error on query with JOIN + ARRAY JOIN [#4938](https://github.com/yandex/ClickHouse/pull/4938) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed hanging on start of the server when a dictionary depends on another dictionary via a database with engine=Dictionary. [#4962](https://github.com/yandex/ClickHouse/pull/4962) ([Vitaly Baranov](https://github.com/vitlibar))
-* Partially fix distributed_product_mode = local. It's possible to allow columns of local tables in where/having/order by/... via table aliases. Throw exception if table does not have alias. There's not possible to access to the columns without table aliases yet. [#4986](https://github.com/yandex/ClickHouse/pull/4986) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix potentially wrong result for `SELECT DISTINCT` with `JOIN` [#5001](https://github.com/yandex/ClickHouse/pull/5001) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed potential null pointer dereference in `clickhouse-copier`. [#4900](https://github.com/ClickHouse/ClickHouse/pull/4900) ([proller](https://github.com/proller))
+* Fixed error on query with JOIN + ARRAY JOIN [#4938](https://github.com/ClickHouse/ClickHouse/pull/4938) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed hanging on start of the server when a dictionary depends on another dictionary via a database with engine=Dictionary. [#4962](https://github.com/ClickHouse/ClickHouse/pull/4962) ([Vitaly Baranov](https://github.com/vitlibar))
+* Partially fix distributed_product_mode = local. It's possible to allow columns of local tables in where/having/order by/... via table aliases. Throw an exception if the table does not have an alias. It is not yet possible to access the columns without table aliases. [#4986](https://github.com/ClickHouse/ClickHouse/pull/4986) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix potentially wrong result for `SELECT DISTINCT` with `JOIN` [#5001](https://github.com/ClickHouse/ClickHouse/pull/5001) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/ClickHouse/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Build/Testing/Packaging Improvements
-* Fixed test failures when running clickhouse-server on different host [#4713](https://github.com/yandex/ClickHouse/pull/4713) ([Vasily Nemkov](https://github.com/Enmk))
-* clickhouse-test: Disable color control sequences in non tty environment. [#4937](https://github.com/yandex/ClickHouse/pull/4937) ([alesapin](https://github.com/alesapin))
-* clickhouse-test: Allow use any test database (remove `test.` qualification where it possible) [#5008](https://github.com/yandex/ClickHouse/pull/5008) ([proller](https://github.com/proller))
-* Fix ubsan errors [#5037](https://github.com/yandex/ClickHouse/pull/5037) ([Vitaly Baranov](https://github.com/vitlibar))
-* Yandex LFAlloc was added to ClickHouse to allocate MarkCache and UncompressedCache data in different ways to catch segfaults more reliable [#4995](https://github.com/yandex/ClickHouse/pull/4995) ([Danila Kutenin](https://github.com/danlark1))
-* Python util to help with backports and changelogs. [#4949](https://github.com/yandex/ClickHouse/pull/4949) ([Ivan](https://github.com/abyss7))
+* Fixed test failures when running clickhouse-server on a different host [#4713](https://github.com/ClickHouse/ClickHouse/pull/4713) ([Vasily Nemkov](https://github.com/Enmk))
+* clickhouse-test: Disable color control sequences in a non-tty environment. [#4937](https://github.com/ClickHouse/ClickHouse/pull/4937) ([alesapin](https://github.com/alesapin))
+* clickhouse-test: Allow using any test database (remove the `test.` qualification where possible) [#5008](https://github.com/ClickHouse/ClickHouse/pull/5008) ([proller](https://github.com/proller))
+* Fix UBSan errors [#5037](https://github.com/ClickHouse/ClickHouse/pull/5037) ([Vitaly Baranov](https://github.com/vitlibar))
+* Yandex LFAlloc was added to ClickHouse to allocate MarkCache and UncompressedCache data in different ways to catch segfaults more reliably [#4995](https://github.com/ClickHouse/ClickHouse/pull/4995) ([Danila Kutenin](https://github.com/danlark1))
+* Python util to help with backports and changelogs. [#4949](https://github.com/ClickHouse/ClickHouse/pull/4949) ([Ivan](https://github.com/abyss7))
## ClickHouse release 19.5.4.22, 2019-05-13
### Bug fixes
-* Fixed possible crash in bitmap* functions [#5220](https://github.com/yandex/ClickHouse/pull/5220) [#5228](https://github.com/yandex/ClickHouse/pull/5228) ([Andy Yang](https://github.com/andyyzh))
-* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/yandex/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error `Set for IN is not created yet in case of using single LowCardinality column in the left part of IN`. This error happened if LowCardinality column was the part of primary key. #5031 [#5154](https://github.com/yandex/ClickHouse/pull/5154) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Modification of retention function: If a row satisfies both the first and NTH condition, only the first satisfied condition is added to the data state. Now all conditions that satisfy in a row of data are added to the data state. [#5119](https://github.com/yandex/ClickHouse/pull/5119) ([小路](https://github.com/nicelulu))
+* Fixed possible crash in bitmap* functions [#5220](https://github.com/ClickHouse/ClickHouse/pull/5220) [#5228](https://github.com/ClickHouse/ClickHouse/pull/5228) ([Andy Yang](https://github.com/andyyzh))
+* Fixed very rare data race condition that could happen when executing a query with UNION ALL involving at least two SELECTs from system.columns, system.tables, system.parts, system.parts_tables or tables of Merge family and performing ALTER of columns of the related tables concurrently. [#5189](https://github.com/ClickHouse/ClickHouse/pull/5189) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed error `Set for IN is not created yet in case of using single LowCardinality column in the left part of IN`. This error happened if the LowCardinality column was part of the primary key. #5031 [#5154](https://github.com/ClickHouse/ClickHouse/pull/5154) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Changed the `retention` function: previously, if a row satisfied both the first and the Nth condition, only the first satisfied condition was added to the data state. Now all conditions satisfied in a row are added to the data state (see the sketch after this list). [#5119](https://github.com/ClickHouse/ClickHouse/pull/5119) ([小路](https://github.com/nicelulu))
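+
+A sketch of the changed semantics over a hypothetical `events` table; `retention` returns one 0/1 flag per condition, and per the fix a row now contributes to every condition it satisfies rather than only the first:
+
+```sql
+SELECT uid,
+       retention(date = '2019-01-01', date = '2019-01-02') AS r
+FROM events
+GROUP BY uid;
+```
+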
## ClickHouse release 19.5.3.8, 2019-04-18
### Bug fixes
-* Fixed type of setting `max_partitions_per_insert_block` from boolean to UInt64. [#5028](https://github.com/yandex/ClickHouse/pull/5028) ([Mohammad Hossein Sekhavat](https://github.com/mhsekhavat))
+* Fixed the type of the setting `max_partitions_per_insert_block` from boolean to UInt64 (see the sketch below). [#5028](https://github.com/ClickHouse/ClickHouse/pull/5028) ([Mohammad Hossein Sekhavat](https://github.com/mhsekhavat))
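+
+A one-line illustration of the corrected type (the value is arbitrary; any UInt64 is accepted now):
+
+```sql
+SET max_partitions_per_insert_block = 500;
+```
+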
## ClickHouse release 19.5.2.6, 2019-04-15
### New Features
-* [Hyperscan](https://github.com/intel/hyperscan) multiple regular expression matching was added (functions `multiMatchAny`, `multiMatchAnyIndex`, `multiFuzzyMatchAny`, `multiFuzzyMatchAnyIndex`). [#4780](https://github.com/yandex/ClickHouse/pull/4780), [#4841](https://github.com/yandex/ClickHouse/pull/4841) ([Danila Kutenin](https://github.com/danlark1))
-* `multiSearchFirstPosition` function was added. [#4780](https://github.com/yandex/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
-* Implement the predefined expression filter per row for tables. [#4792](https://github.com/yandex/ClickHouse/pull/4792) ([Ivan](https://github.com/abyss7))
-* A new type of data skipping indices based on bloom filters (can be used for `equal`, `in` and `like` functions). [#4499](https://github.com/yandex/ClickHouse/pull/4499) ([Nikita Vasilev](https://github.com/nikvas0))
-* Added `ASOF JOIN` which allows to run queries that join to the most recent value known. [#4774](https://github.com/yandex/ClickHouse/pull/4774) [#4867](https://github.com/yandex/ClickHouse/pull/4867) [#4863](https://github.com/yandex/ClickHouse/pull/4863) [#4875](https://github.com/yandex/ClickHouse/pull/4875) ([Martijn Bakker](https://github.com/Gladdy), [Artem Zuikov](https://github.com/4ertus2))
-* Rewrite multiple `COMMA JOIN` to `CROSS JOIN`. Then rewrite them to `INNER JOIN` if possible. [#4661](https://github.com/yandex/ClickHouse/pull/4661) ([Artem Zuikov](https://github.com/4ertus2))
+* [Hyperscan](https://github.com/intel/hyperscan) multiple regular expression matching was added (functions `multiMatchAny`, `multiMatchAnyIndex`, `multiFuzzyMatchAny`, `multiFuzzyMatchAnyIndex`). [#4780](https://github.com/ClickHouse/ClickHouse/pull/4780), [#4841](https://github.com/ClickHouse/ClickHouse/pull/4841) ([Danila Kutenin](https://github.com/danlark1))
+* `multiSearchFirstPosition` function was added. [#4780](https://github.com/ClickHouse/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
+* Implemented per-row predefined expression filters for tables. [#4792](https://github.com/ClickHouse/ClickHouse/pull/4792) ([Ivan](https://github.com/abyss7))
+* Added a new type of data skipping indices based on bloom filters (can be used for the `equal`, `in` and `like` functions). [#4499](https://github.com/ClickHouse/ClickHouse/pull/4499) ([Nikita Vasilev](https://github.com/nikvas0))
+* Added `ASOF JOIN`, which allows running queries that join to the most recent known value (see the sketch after this list). [#4774](https://github.com/ClickHouse/ClickHouse/pull/4774) [#4867](https://github.com/ClickHouse/ClickHouse/pull/4867) [#4863](https://github.com/ClickHouse/ClickHouse/pull/4863) [#4875](https://github.com/ClickHouse/ClickHouse/pull/4875) ([Martijn Bakker](https://github.com/Gladdy), [Artem Zuikov](https://github.com/4ertus2))
+* Rewrite multiple `COMMA JOIN` to `CROSS JOIN`. Then rewrite them to `INNER JOIN` if possible. [#4661](https://github.com/ClickHouse/ClickHouse/pull/4661) ([Artem Zuikov](https://github.com/4ertus2))
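+
+A minimal `ASOF JOIN` sketch with hypothetical `trades` and `quotes` tables, assuming the `USING` form from the pull requests above, where the last column listed is the inequality key:
+
+```sql
+-- Each trade is matched with the most recent quote at or before its timestamp.
+SELECT t.symbol, t.ts, q.bid
+FROM trades AS t
+ASOF JOIN quotes AS q USING (symbol, ts);
+```
+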
### Improvement
-* `topK` and `topKWeighted` now supports custom `loadFactor` (fixes issue [#4252](https://github.com/yandex/ClickHouse/issues/4252)). [#4634](https://github.com/yandex/ClickHouse/pull/4634) ([Kirill Danshin](https://github.com/kirillDanshin))
-* Allow to use `parallel_replicas_count > 1` even for tables without sampling (the setting is simply ignored for them). In previous versions it was lead to exception. [#4637](https://github.com/yandex/ClickHouse/pull/4637) ([Alexey Elymanov](https://github.com/digitalist))
-* Support for `CREATE OR REPLACE VIEW`. Allow to create a view or set a new definition in a single statement. [#4654](https://github.com/yandex/ClickHouse/pull/4654) ([Boris Granveaud](https://github.com/bgranvea))
-* `Buffer` table engine now supports `PREWHERE`. [#4671](https://github.com/yandex/ClickHouse/pull/4671) ([Yangkuan Liu](https://github.com/LiuYangkuan))
-* Add ability to start replicated table without metadata in zookeeper in `readonly` mode. [#4691](https://github.com/yandex/ClickHouse/pull/4691) ([alesapin](https://github.com/alesapin))
-* Fixed flicker of progress bar in clickhouse-client. The issue was most noticeable when using `FORMAT Null` with streaming queries. [#4811](https://github.com/yandex/ClickHouse/pull/4811) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Allow to disable functions with `hyperscan` library on per user basis to limit potentially excessive and uncontrolled resource usage. [#4816](https://github.com/yandex/ClickHouse/pull/4816) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add version number logging in all errors. [#4824](https://github.com/yandex/ClickHouse/pull/4824) ([proller](https://github.com/proller))
-* Added restriction to the `multiMatch` functions which requires string size to fit into `unsigned int`. Also added the number of arguments limit to the `multiSearch` functions. [#4834](https://github.com/yandex/ClickHouse/pull/4834) ([Danila Kutenin](https://github.com/danlark1))
-* Improved usage of scratch space and error handling in Hyperscan. [#4866](https://github.com/yandex/ClickHouse/pull/4866) ([Danila Kutenin](https://github.com/danlark1))
-* Fill `system.graphite_detentions` from a table config of `*GraphiteMergeTree` engine tables. [#4584](https://github.com/yandex/ClickHouse/pull/4584) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
-* Rename `trigramDistance` function to `ngramDistance` and add more functions with `CaseInsensitive` and `UTF`. [#4602](https://github.com/yandex/ClickHouse/pull/4602) ([Danila Kutenin](https://github.com/danlark1))
-* Improved data skipping indices calculation. [#4640](https://github.com/yandex/ClickHouse/pull/4640) ([Nikita Vasilev](https://github.com/nikvas0))
-* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
+* `topK` and `topKWeighted` now support a custom `loadFactor` (fixes issue [#4252](https://github.com/ClickHouse/ClickHouse/issues/4252)). [#4634](https://github.com/ClickHouse/ClickHouse/pull/4634) ([Kirill Danshin](https://github.com/kirillDanshin))
+* Allow using `parallel_replicas_count > 1` even for tables without sampling (the setting is simply ignored for them). In previous versions this led to an exception. [#4637](https://github.com/ClickHouse/ClickHouse/pull/4637) ([Alexey Elymanov](https://github.com/digitalist))
+* Support for `CREATE OR REPLACE VIEW`. Allows creating a view or setting a new definition in a single statement (see the sketch after this list). [#4654](https://github.com/ClickHouse/ClickHouse/pull/4654) ([Boris Granveaud](https://github.com/bgranvea))
+* `Buffer` table engine now supports `PREWHERE`. [#4671](https://github.com/ClickHouse/ClickHouse/pull/4671) ([Yangkuan Liu](https://github.com/LiuYangkuan))
+* Added the ability to start a replicated table in `readonly` mode without metadata in ZooKeeper. [#4691](https://github.com/ClickHouse/ClickHouse/pull/4691) ([alesapin](https://github.com/alesapin))
+* Fixed flicker of progress bar in clickhouse-client. The issue was most noticeable when using `FORMAT Null` with streaming queries. [#4811](https://github.com/ClickHouse/ClickHouse/pull/4811) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow disabling functions that use the `hyperscan` library on a per-user basis to limit potentially excessive and uncontrolled resource usage. [#4816](https://github.com/ClickHouse/ClickHouse/pull/4816) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add version number logging in all errors. [#4824](https://github.com/ClickHouse/ClickHouse/pull/4824) ([proller](https://github.com/proller))
+* Added a restriction to the `multiMatch` functions that requires the string size to fit into an `unsigned int`. Also added a limit on the number of arguments to the `multiSearch` functions. [#4834](https://github.com/ClickHouse/ClickHouse/pull/4834) ([Danila Kutenin](https://github.com/danlark1))
+* Improved usage of scratch space and error handling in Hyperscan. [#4866](https://github.com/ClickHouse/ClickHouse/pull/4866) ([Danila Kutenin](https://github.com/danlark1))
+* Fill `system.graphite_detentions` from a table config of `*GraphiteMergeTree` engine tables. [#4584](https://github.com/ClickHouse/ClickHouse/pull/4584) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
+* Rename `trigramDistance` function to `ngramDistance` and add more functions with `CaseInsensitive` and `UTF`. [#4602](https://github.com/ClickHouse/ClickHouse/pull/4602) ([Danila Kutenin](https://github.com/danlark1))
+* Improved data skipping indices calculation. [#4640](https://github.com/ClickHouse/ClickHouse/pull/4640) ([Nikita Vasilev](https://github.com/nikvas0))
+* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/ClickHouse/ClickHouse/issues/2867)). [#4707](https://github.com/ClickHouse/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
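+
+A quick `CREATE OR REPLACE VIEW` example over a hypothetical `hits` table; the same statement works whether or not the view already exists:
+
+```sql
+CREATE OR REPLACE VIEW hits_per_domain AS
+    SELECT domain, count() AS c
+    FROM hits
+    GROUP BY domain;
+```
+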
### Bug Fix
-* Avoid `std::terminate` in case of memory allocation failure. Now `std::bad_alloc` exception is thrown as expected. [#4665](https://github.com/yandex/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixes capnproto reading from buffer. Sometimes files wasn't loaded successfully by HTTP. [#4674](https://github.com/yandex/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
-* Fix error `Unknown log entry type: 0` after `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/yandex/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
-* Wrong arguments to `hasAny` or `hasAll` functions may lead to segfault. [#4698](https://github.com/yandex/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Deadlock may happen while executing `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
-* Fix compression level detection when `network_compression_method` in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
-* Fixed ignorance of `UTC` setting (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
-* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
-* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed TSan report on shutdown due to race condition in system logs usage. Fixed potential use-after-free on shutdown when part_log is enabled. [#4758](https://github.com/yandex/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix recheck parts in `ReplicatedMergeTreeAlterThread` in case of error. [#4772](https://github.com/yandex/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results). [#4776](https://github.com/yandex/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Always backquote column names in metadata. Otherwise it's impossible to create a table with column named `index` (server won't restart due to malformed `ATTACH` query in metadata). [#4782](https://github.com/yandex/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix crash in `ALTER ... MODIFY ORDER BY` on `Distributed` table. [#4790](https://github.com/yandex/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
-* Fix segfault in `JOIN ON` with enabled `enable_optimize_predicate_expression`. [#4794](https://github.com/yandex/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
-* Fix bug with adding an extraneous row after consuming a protobuf message from Kafka. [#4808](https://github.com/yandex/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix crash of `JOIN` on not-nullable vs nullable column. Fix `NULLs` in right keys in `ANY JOIN` + `join_use_nulls`. [#4815](https://github.com/yandex/ClickHouse/pull/4815) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
-* Fixed race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/yandex/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed data race when fetching data part that is already obsolete. [#4839](https://github.com/yandex/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed rare data race that can happen during `RENAME` table of MergeTree family. [#4844](https://github.com/yandex/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed segmentation fault in function `arrayIntersect`. Segmentation fault could happen if function was called with mixed constant and ordinary arguments. [#4847](https://github.com/yandex/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
-* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix crash in `FULL/RIGHT JOIN` when we joining on nullable vs not nullable. [#4855](https://github.com/yandex/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix `No message received` exception while fetching parts between replicas. [#4856](https://github.com/yandex/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
-* Fixed `arrayIntersect` function wrong result in case of several repeated values in single array. [#4871](https://github.com/yandex/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/yandex/ClickHouse/issues/3421)). [#4592](https://github.com/yandex/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fix incorrect result in `FULL/RIGHT JOIN` with const column. [#4723](https://github.com/yandex/ClickHouse/pull/4723) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix duplicates in `GLOBAL JOIN` with asterisk. [#4705](https://github.com/yandex/ClickHouse/pull/4705) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix parameter deduction in `ALTER MODIFY` of column `CODEC` when column type is not specified. [#4883](https://github.com/yandex/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
-* Functions `cutQueryStringAndFragment()` and `queryStringAndFragment()` now works correctly when `URL` contains a fragment and no query. [#4894](https://github.com/yandex/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix rare bug when setting `min_bytes_to_use_direct_io` is greater than zero, which occures when thread have to seek backward in column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
-* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix wrong name qualification in `GLOBAL JOIN`. [#4969](https://github.com/yandex/ClickHouse/pull/4969) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` queries duplication, when executed on `ON CLUSTER` for `ReplicatedMergeTree*` tables family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
+* Avoid `std::terminate` in case of memory allocation failure. Now a `std::bad_alloc` exception is thrown as expected. [#4665](https://github.com/ClickHouse/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed capnproto reading from buffer. Sometimes files weren't loaded successfully over HTTP. [#4674](https://github.com/ClickHouse/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
+* Fix error `Unknown log entry type: 0` after `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/ClickHouse/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
+* Wrong arguments to the `hasAny` or `hasAll` functions could lead to a segfault. [#4698](https://github.com/ClickHouse/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* A deadlock could happen while executing a `DROP DATABASE dictionary` query. [#4701](https://github.com/ClickHouse/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/ClickHouse/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
+* Fix compression level detection when `network_compression_method` is in lowercase. Broken in v19.1. [#4706](https://github.com/ClickHouse/ClickHouse/pull/4706) ([proller](https://github.com/proller))
+* Fixed the `UTC` setting being ignored (fixes issue [#4658](https://github.com/ClickHouse/ClickHouse/issues/4658)). [#4718](https://github.com/ClickHouse/ClickHouse/pull/4718) ([proller](https://github.com/proller))
+* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/ClickHouse/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
+* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/ClickHouse/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed TSan report on shutdown due to race condition in system logs usage. Fixed potential use-after-free on shutdown when part_log is enabled. [#4758](https://github.com/ClickHouse/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix recheck parts in `ReplicatedMergeTreeAlterThread` in case of error. [#4772](https://github.com/ClickHouse/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results). [#4776](https://github.com/ClickHouse/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Always backquote column names in metadata. Otherwise it's impossible to create a table with a column named `index` (the server won't restart due to a malformed `ATTACH` query in metadata); see the sketch after this list. [#4782](https://github.com/ClickHouse/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix crash in `ALTER ... MODIFY ORDER BY` on `Distributed` table. [#4790](https://github.com/ClickHouse/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
+* Fix segfault in `JOIN ON` with enabled `enable_optimize_predicate_expression`. [#4794](https://github.com/ClickHouse/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
+* Fix bug with adding an extraneous row after consuming a protobuf message from Kafka. [#4808](https://github.com/ClickHouse/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix crash of `JOIN` on a not-nullable vs. nullable column. Fix `NULLs` in right keys in `ANY JOIN` + `join_use_nulls`. [#4815](https://github.com/ClickHouse/ClickHouse/pull/4815) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/ClickHouse/ClickHouse/pull/4835) ([proller](https://github.com/proller))
+* Fixed race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/ClickHouse/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed data race when fetching data part that is already obsolete. [#4839](https://github.com/ClickHouse/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed rare data race that can happen during `RENAME` table of MergeTree family. [#4844](https://github.com/ClickHouse/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed segmentation fault in the `arrayIntersect` function. It could happen if the function was called with mixed constant and ordinary arguments. [#4847](https://github.com/ClickHouse/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
+* Fixed reading from an `Array(LowCardinality)` column in a rare case when the column contained a long sequence of empty arrays. [#4850](https://github.com/ClickHouse/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix crash in `FULL/RIGHT JOIN` when joining on nullable vs. not-nullable columns. [#4855](https://github.com/ClickHouse/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix `No message received` exception while fetching parts between replicas. [#4856](https://github.com/ClickHouse/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
+* Fixed wrong result of the `arrayIntersect` function in case of several repeated values in a single array. [#4871](https://github.com/ClickHouse/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/ClickHouse/ClickHouse/issues/3421)). [#4592](https://github.com/ClickHouse/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fix incorrect result in `FULL/RIGHT JOIN` with const column. [#4723](https://github.com/ClickHouse/ClickHouse/pull/4723) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix duplicates in `GLOBAL JOIN` with asterisk. [#4705](https://github.com/ClickHouse/ClickHouse/pull/4705) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix parameter deduction in `ALTER MODIFY` of column `CODEC` when column type is not specified. [#4883](https://github.com/ClickHouse/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
+* Functions `cutQueryStringAndFragment()` and `queryStringAndFragment()` now work correctly when the `URL` contains a fragment and no query. [#4894](https://github.com/ClickHouse/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix a rare bug when the setting `min_bytes_to_use_direct_io` is greater than zero, which occurs when a thread has to seek backward in a column file. [#4897](https://github.com/ClickHouse/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
+* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/ClickHouse/ClickHouse/issues/4919)). [#4922](https://github.com/ClickHouse/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix wrong name qualification in `GLOBAL JOIN`. [#4969](https://github.com/ClickHouse/ClickHouse/pull/4969) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/ClickHouse/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` query duplication when executed with `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/ClickHouse/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
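+
+A sketch of the backquoting fix with a hypothetical table; the backquotes keep the `ATTACH` query stored in metadata parseable even though the column is named `index`:
+
+```sql
+CREATE TABLE t_with_index_col
+(
+    `index` UInt32,
+    value String
+)
+ENGINE = MergeTree ORDER BY `index`;
+```
+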
### Backward Incompatible Change
-* Rename setting `insert_sample_with_metadata` to setting `input_format_defaults_for_omitted_fields`. [#4771](https://github.com/yandex/ClickHouse/pull/4771) ([Artem Zuikov](https://github.com/4ertus2))
-* Added setting `max_partitions_per_insert_block` (with value 100 by default). If inserted block contains larger number of partitions, an exception is thrown. Set it to 0 if you want to remove the limit (not recommended). [#4845](https://github.com/yandex/ClickHouse/pull/4845) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Multi-search functions were renamed (`multiPosition` to `multiSearchAllPositions`, `multiSearch` to `multiSearchAny`, `firstMatch` to `multiSearchFirstIndex`). [#4780](https://github.com/yandex/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
+* Renamed the setting `insert_sample_with_metadata` to `input_format_defaults_for_omitted_fields`. [#4771](https://github.com/ClickHouse/ClickHouse/pull/4771) ([Artem Zuikov](https://github.com/4ertus2))
+* Added setting `max_partitions_per_insert_block` (with value 100 by default). If an inserted block contains a larger number of partitions, an exception is thrown. Set it to 0 if you want to remove the limit (not recommended). [#4845](https://github.com/ClickHouse/ClickHouse/pull/4845) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Multi-search functions were renamed (`multiPosition` to `multiSearchAllPositions`, `multiSearch` to `multiSearchAny`, `firstMatch` to `multiSearchFirstIndex`); see the sketch after this list. [#4780](https://github.com/ClickHouse/ClickHouse/pull/4780) ([Danila Kutenin](https://github.com/danlark1))
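+
+A short sketch of the renamed functions (expected results shown as comments):
+
+```sql
+SELECT multiSearchAny('hello world', ['foo', 'world']);          -- 1 (was `multiSearch`)
+SELECT multiSearchAllPositions('hello world', ['foo', 'world']); -- [0, 7] (was `multiPosition`)
+```
+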
### Performance Improvement
-* Optimize Volnitsky searcher by inlining, giving about 5-10% search improvement for queries with many needles or many similar bigrams. [#4862](https://github.com/yandex/ClickHouse/pull/4862) ([Danila Kutenin](https://github.com/danlark1))
-* Fix performance issue when setting `use_uncompressed_cache` is greater than zero, which appeared when all read data contained in cache. [#4913](https://github.com/yandex/ClickHouse/pull/4913) ([alesapin](https://github.com/alesapin))
+* Optimize Volnitsky searcher by inlining, giving about 5-10% search improvement for queries with many needles or many similar bigrams. [#4862](https://github.com/ClickHouse/ClickHouse/pull/4862) ([Danila Kutenin](https://github.com/danlark1))
+* Fix performance issue when the setting `use_uncompressed_cache` is greater than zero, which appeared when all read data was contained in the cache. [#4913](https://github.com/ClickHouse/ClickHouse/pull/4913) ([alesapin](https://github.com/alesapin))
### Build/Testing/Packaging Improvement
-* Hardening debug build: more granular memory mappings and ASLR; add memory protection for mark cache and index. This allows to find more memory stomping bugs in case when ASan and MSan cannot do it. [#4632](https://github.com/yandex/ClickHouse/pull/4632) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add support for cmake variables `ENABLE_PROTOBUF`, `ENABLE_PARQUET` and `ENABLE_BROTLI` which allows to enable/disable the above features (same as we can do for librdkafka, mysql, etc). [#4669](https://github.com/yandex/ClickHouse/pull/4669) ([Silviu Caragea](https://github.com/silviucpp))
-* Add ability to print process list and stacktraces of all threads if some queries are hung after test run. [#4675](https://github.com/yandex/ClickHouse/pull/4675) ([alesapin](https://github.com/alesapin))
-* Add retries on `Connection loss` error in `clickhouse-test`. [#4682](https://github.com/yandex/ClickHouse/pull/4682) ([alesapin](https://github.com/alesapin))
-* Add freebsd build with vagrant and build with thread sanitizer to packager script. [#4712](https://github.com/yandex/ClickHouse/pull/4712) [#4748](https://github.com/yandex/ClickHouse/pull/4748) ([alesapin](https://github.com/alesapin))
-* Now user asked for password for user `'default'` during installation. [#4725](https://github.com/yandex/ClickHouse/pull/4725) ([proller](https://github.com/proller))
-* Suppress warning in `rdkafka` library. [#4740](https://github.com/yandex/ClickHouse/pull/4740) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Allow ability to build without ssl. [#4750](https://github.com/yandex/ClickHouse/pull/4750) ([proller](https://github.com/proller))
-* Add a way to launch clickhouse-server image from a custom user. [#4753](https://github.com/yandex/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
-* Upgrade contrib boost to 1.69. [#4793](https://github.com/yandex/ClickHouse/pull/4793) ([proller](https://github.com/proller))
-* Disable usage of `mremap` when compiled with Thread Sanitizer. Surprisingly enough, TSan does not intercept `mremap` (though it does intercept `mmap`, `munmap`) that leads to false positives. Fixed TSan report in stateful tests. [#4859](https://github.com/yandex/ClickHouse/pull/4859) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add test checking using format schema via HTTP interface. [#4864](https://github.com/yandex/ClickHouse/pull/4864) ([Vitaly Baranov](https://github.com/vitlibar))
+* Hardening debug build: more granular memory mappings and ASLR; add memory protection for mark cache and index. This allows finding more memory-stomping bugs in cases where ASan and MSan cannot do it. [#4632](https://github.com/ClickHouse/ClickHouse/pull/4632) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add support for cmake variables `ENABLE_PROTOBUF`, `ENABLE_PARQUET` and `ENABLE_BROTLI`, which allow enabling/disabling the above features (same as we can do for librdkafka, mysql, etc). [#4669](https://github.com/ClickHouse/ClickHouse/pull/4669) ([Silviu Caragea](https://github.com/silviucpp))
+* Add the ability to print the process list and stacktraces of all threads if some queries hang after a test run. [#4675](https://github.com/ClickHouse/ClickHouse/pull/4675) ([alesapin](https://github.com/alesapin))
+* Add retries on `Connection loss` error in `clickhouse-test`. [#4682](https://github.com/ClickHouse/ClickHouse/pull/4682) ([alesapin](https://github.com/alesapin))
+* Add freebsd build with vagrant and build with thread sanitizer to packager script. [#4712](https://github.com/ClickHouse/ClickHouse/pull/4712) [#4748](https://github.com/ClickHouse/ClickHouse/pull/4748) ([alesapin](https://github.com/alesapin))
+* Now the user is asked for a password for the `'default'` user during installation. [#4725](https://github.com/ClickHouse/ClickHouse/pull/4725) ([proller](https://github.com/proller))
+* Suppress warning in `rdkafka` library. [#4740](https://github.com/ClickHouse/ClickHouse/pull/4740) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow building without SSL. [#4750](https://github.com/ClickHouse/ClickHouse/pull/4750) ([proller](https://github.com/proller))
+* Add a way to launch the clickhouse-server image under a custom user. [#4753](https://github.com/ClickHouse/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
+* Upgrade contrib boost to 1.69. [#4793](https://github.com/ClickHouse/ClickHouse/pull/4793) ([proller](https://github.com/proller))
+* Disable usage of `mremap` when compiled with Thread Sanitizer. Surprisingly enough, TSan does not intercept `mremap` (though it does intercept `mmap`, `munmap`), which leads to false positives. Fixed TSan report in stateful tests. [#4859](https://github.com/ClickHouse/ClickHouse/pull/4859) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added a test that checks using a format schema via the HTTP interface. [#4864](https://github.com/ClickHouse/ClickHouse/pull/4864) ([Vitaly Baranov](https://github.com/vitlibar))
## ClickHouse release 19.4.4.33, 2019-04-17
### Bug Fixes
-* Avoid `std::terminate` in case of memory allocation failure. Now `std::bad_alloc` exception is thrown as expected. [#4665](https://github.com/yandex/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixes capnproto reading from buffer. Sometimes files wasn't loaded successfully by HTTP. [#4674](https://github.com/yandex/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
-* Fix error `Unknown log entry type: 0` after `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/yandex/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
-* Wrong arguments to `hasAny` or `hasAll` functions may lead to segfault. [#4698](https://github.com/yandex/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Deadlock may happen while executing `DROP DATABASE dictionary` query. [#4701](https://github.com/yandex/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/yandex/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
-* Fix compression level detection when `network_compression_method` in lowercase. Broken in v19.1. [#4706](https://github.com/yandex/ClickHouse/pull/4706) ([proller](https://github.com/proller))
-* Fixed ignorance of `UTC` setting (fixes issue [#4658](https://github.com/yandex/ClickHouse/issues/4658)). [#4718](https://github.com/yandex/ClickHouse/pull/4718) ([proller](https://github.com/proller))
-* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/yandex/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
-* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/yandex/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed TSan report on shutdown due to race condition in system logs usage. Fixed potential use-after-free on shutdown when part_log is enabled. [#4758](https://github.com/yandex/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix recheck parts in `ReplicatedMergeTreeAlterThread` in case of error. [#4772](https://github.com/yandex/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results). [#4776](https://github.com/yandex/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Always backquote column names in metadata. Otherwise it's impossible to create a table with column named `index` (server won't restart due to malformed `ATTACH` query in metadata). [#4782](https://github.com/yandex/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix crash in `ALTER ... MODIFY ORDER BY` on `Distributed` table. [#4790](https://github.com/yandex/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
-* Fix segfault in `JOIN ON` with enabled `enable_optimize_predicate_expression`. [#4794](https://github.com/yandex/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
-* Fix bug with adding an extraneous row after consuming a protobuf message from Kafka. [#4808](https://github.com/yandex/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
-* Fixed race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/yandex/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed data race when fetching data part that is already obsolete. [#4839](https://github.com/yandex/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed rare data race that can happen during `RENAME` table of MergeTree family. [#4844](https://github.com/yandex/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed segmentation fault in function `arrayIntersect`. Segmentation fault could happen if function was called with mixed constant and ordinary arguments. [#4847](https://github.com/yandex/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
-* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix `No message received` exception while fetching parts between replicas. [#4856](https://github.com/yandex/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
-* Fixed `arrayIntersect` function wrong result in case of several repeated values in single array. [#4871](https://github.com/yandex/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/yandex/ClickHouse/issues/3421)). [#4592](https://github.com/yandex/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fix parameter deduction in `ALTER MODIFY` of column `CODEC` when column type is not specified. [#4883](https://github.com/yandex/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
-* Functions `cutQueryStringAndFragment()` and `queryStringAndFragment()` now works correctly when `URL` contains a fragment and no query. [#4894](https://github.com/yandex/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
-* Fix rare bug when setting `min_bytes_to_use_direct_io` is greater than zero, which occures when thread have to seek backward in column file. [#4897](https://github.com/yandex/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
-* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/yandex/ClickHouse/issues/4919)). [#4922](https://github.com/yandex/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/yandex/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` queries duplication, when executed on `ON CLUSTER` for `ReplicatedMergeTree*` tables family. [#4991](https://github.com/yandex/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
+* Avoid `std::terminate` in case of memory allocation failure. Now a `std::bad_alloc` exception is thrown as expected. [#4665](https://github.com/ClickHouse/ClickHouse/pull/4665) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed capnproto reading from buffer. Sometimes files weren't loaded successfully over HTTP. [#4674](https://github.com/ClickHouse/ClickHouse/pull/4674) ([Vladislav](https://github.com/smirnov-vs))
+* Fix error `Unknown log entry type: 0` after `OPTIMIZE TABLE FINAL` query. [#4683](https://github.com/ClickHouse/ClickHouse/pull/4683) ([Amos Bird](https://github.com/amosbird))
+* Wrong arguments to the `hasAny` or `hasAll` functions could lead to a segfault. [#4698](https://github.com/ClickHouse/ClickHouse/pull/4698) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* A deadlock could happen while executing a `DROP DATABASE dictionary` query. [#4701](https://github.com/ClickHouse/ClickHouse/pull/4701) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix undefined behavior in `median` and `quantile` functions. [#4702](https://github.com/ClickHouse/ClickHouse/pull/4702) ([hcz](https://github.com/hczhcz))
+* Fix compression level detection when `network_compression_method` is in lowercase. Broken in v19.1. [#4706](https://github.com/ClickHouse/ClickHouse/pull/4706) ([proller](https://github.com/proller))
+* Fixed the `UTC` setting being ignored (fixes issue [#4658](https://github.com/ClickHouse/ClickHouse/issues/4658)). [#4718](https://github.com/ClickHouse/ClickHouse/pull/4718) ([proller](https://github.com/proller))
+* Fix `histogram` function behaviour with `Distributed` tables. [#4741](https://github.com/ClickHouse/ClickHouse/pull/4741) ([olegkv](https://github.com/olegkv))
+* Fixed tsan report `destroy of a locked mutex`. [#4742](https://github.com/ClickHouse/ClickHouse/pull/4742) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed TSan report on shutdown due to race condition in system logs usage. Fixed potential use-after-free on shutdown when part_log is enabled. [#4758](https://github.com/ClickHouse/ClickHouse/pull/4758) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix recheck parts in `ReplicatedMergeTreeAlterThread` in case of error. [#4772](https://github.com/ClickHouse/ClickHouse/pull/4772) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results). [#4776](https://github.com/ClickHouse/ClickHouse/pull/4776) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Always backquote column names in metadata. Otherwise it's impossible to create a table with a column named `index` (the server won't restart due to a malformed `ATTACH` query in metadata). [#4782](https://github.com/ClickHouse/ClickHouse/pull/4782) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix crash in `ALTER ... MODIFY ORDER BY` on `Distributed` table. [#4790](https://github.com/ClickHouse/ClickHouse/pull/4790) ([TCeason](https://github.com/TCeason))
+* Fix segfault in `JOIN ON` with enabled `enable_optimize_predicate_expression`. [#4794](https://github.com/ClickHouse/ClickHouse/pull/4794) ([Winter Zhang](https://github.com/zhang2014))
+* Fix bug with adding an extraneous row after consuming a protobuf message from Kafka. [#4808](https://github.com/ClickHouse/ClickHouse/pull/4808) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/ClickHouse/ClickHouse/pull/4835) ([proller](https://github.com/proller))
+* Fixed race condition in `SELECT` from `system.tables` if the table is renamed or altered concurrently. [#4836](https://github.com/ClickHouse/ClickHouse/pull/4836) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed data race when fetching data part that is already obsolete. [#4839](https://github.com/ClickHouse/ClickHouse/pull/4839) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed rare data race that can happen during `RENAME` table of MergeTree family. [#4844](https://github.com/ClickHouse/ClickHouse/pull/4844) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed segmentation fault in the `arrayIntersect` function. It could happen if the function was called with mixed constant and ordinary arguments. [#4847](https://github.com/ClickHouse/ClickHouse/pull/4847) ([Lixiang Qian](https://github.com/fancyqlx))
+* Fixed reading from an `Array(LowCardinality)` column in a rare case when the column contained a long sequence of empty arrays. [#4850](https://github.com/ClickHouse/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix `No message received` exception while fetching parts between replicas. [#4856](https://github.com/ClickHouse/ClickHouse/pull/4856) ([alesapin](https://github.com/alesapin))
+* Fixed wrong result of the `arrayIntersect` function in case of several repeated values in a single array. [#4871](https://github.com/ClickHouse/ClickHouse/pull/4871) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix a race condition during concurrent `ALTER COLUMN` queries that could lead to a server crash (fixes issue [#3421](https://github.com/ClickHouse/ClickHouse/issues/3421)). [#4592](https://github.com/ClickHouse/ClickHouse/pull/4592) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fix parameter deduction in `ALTER MODIFY` of column `CODEC` when column type is not specified. [#4883](https://github.com/ClickHouse/ClickHouse/pull/4883) ([alesapin](https://github.com/alesapin))
+* Functions `cutQueryStringAndFragment()` and `queryStringAndFragment()` now work correctly when the `URL` contains a fragment and no query (see the sketch after this list). [#4894](https://github.com/ClickHouse/ClickHouse/pull/4894) ([Vitaly Baranov](https://github.com/vitlibar))
+* Fix a rare bug when the setting `min_bytes_to_use_direct_io` is greater than zero, which occurs when a thread has to seek backward in a column file. [#4897](https://github.com/ClickHouse/ClickHouse/pull/4897) ([alesapin](https://github.com/alesapin))
+* Fix wrong argument types for aggregate functions with `LowCardinality` arguments (fixes issue [#4919](https://github.com/ClickHouse/ClickHouse/issues/4919)). [#4922](https://github.com/ClickHouse/ClickHouse/pull/4922) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix function `toISOWeek` result for year 1970. [#4988](https://github.com/ClickHouse/ClickHouse/pull/4988) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix `DROP`, `TRUNCATE` and `OPTIMIZE` query duplication when executed with `ON CLUSTER` for the `ReplicatedMergeTree*` table family. [#4991](https://github.com/ClickHouse/ClickHouse/pull/4991) ([alesapin](https://github.com/alesapin))
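+
+A sketch of the fixed URL-function behaviour for a fragment-only URL (hypothetical URL, results described in comments):
+
+```sql
+SELECT queryStringAndFragment('http://example.com/page#anchor');    -- returns the fragment part
+SELECT cutQueryStringAndFragment('http://example.com/page#anchor'); -- returns the URL without it
+```
+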
### Improvements
-* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/yandex/ClickHouse/issues/2867)). [#4707](https://github.com/yandex/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
+* Keep ordinary, `DEFAULT`, `MATERIALIZED` and `ALIAS` columns in a single list (fixes issue [#2867](https://github.com/ClickHouse/ClickHouse/issues/2867)). [#4707](https://github.com/ClickHouse/ClickHouse/pull/4707) ([Alex Zatelepin](https://github.com/ztlpn))
## ClickHouse release 19.4.3.11, 2019-04-02
### Bug Fixes
-* Fix crash in `FULL/RIGHT JOIN` when we joining on nullable vs not nullable. [#4855](https://github.com/yandex/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
+* Fix crash in `FULL/RIGHT JOIN` when joining on nullable vs. not-nullable columns. [#4855](https://github.com/ClickHouse/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/ClickHouse/ClickHouse/pull/4835) ([proller](https://github.com/proller))
### Build/Testing/Packaging Improvement
-* Add a way to launch clickhouse-server image from a custom user. [#4753](https://github.com/yandex/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
+* Add a way to launch the clickhouse-server image under a custom user. [#4753](https://github.com/ClickHouse/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
## ClickHouse release 19.4.2.7, 2019-03-30
### Bug Fixes
-* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fixed reading from an `Array(LowCardinality)` column in a rare case when the column contained a long sequence of empty arrays. [#4850](https://github.com/ClickHouse/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
## ClickHouse release 19.4.1.3, 2019-03-19
### Bug Fixes
-* Fixed remote queries which contain both `LIMIT BY` and `LIMIT`. Previously, if `LIMIT BY` and `LIMIT` were used for remote query, `LIMIT` could happen before `LIMIT BY`, which led to too filtered result. [#4708](https://github.com/yandex/ClickHouse/pull/4708) ([Constantin S. Pan](https://github.com/kvap))
+* Fixed remote queries that contain both `LIMIT BY` and `LIMIT`. Previously, if `LIMIT BY` and `LIMIT` were used in a remote query, `LIMIT` could happen before `LIMIT BY`, which led to an over-filtered result (see the sketch below). [#4708](https://github.com/ClickHouse/ClickHouse/pull/4708) ([Constantin S. Pan](https://github.com/kvap))
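+
+A sketch of the fixed ordering with a hypothetical distributed table `dist_hits`: `LIMIT BY` must be applied before the outer `LIMIT`, so at most 2 rows per domain survive before the total cap of 10:
+
+```sql
+SELECT domain, hits
+FROM dist_hits
+ORDER BY hits DESC
+LIMIT 2 BY domain
+LIMIT 10;
+```
+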
## ClickHouse release 19.4.0.49, 2019-03-09
### New Features
-* Added full support for `Protobuf` format (input and output, nested data structures). [#4174](https://github.com/yandex/ClickHouse/pull/4174) [#4493](https://github.com/yandex/ClickHouse/pull/4493) ([Vitaly Baranov](https://github.com/vitlibar))
-* Added bitmap functions with Roaring Bitmaps. [#4207](https://github.com/yandex/ClickHouse/pull/4207) ([Andy Yang](https://github.com/andyyzh)) [#4568](https://github.com/yandex/ClickHouse/pull/4568) ([Vitaly Baranov](https://github.com/vitlibar))
-* Parquet format support. [#4448](https://github.com/yandex/ClickHouse/pull/4448) ([proller](https://github.com/proller))
-* N-gram distance was added for fuzzy string comparison. It is similar to q-gram metrics in R language. [#4466](https://github.com/yandex/ClickHouse/pull/4466) ([Danila Kutenin](https://github.com/danlark1))
-* Combine rules for graphite rollup from dedicated aggregation and retention patterns. [#4426](https://github.com/yandex/ClickHouse/pull/4426) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
-* Added `max_execution_speed` and `max_execution_speed_bytes` to limit resource usage. Added `min_execution_speed_bytes` setting to complement the `min_execution_speed`. [#4430](https://github.com/yandex/ClickHouse/pull/4430) ([Winter Zhang](https://github.com/zhang2014))
-* Implemented function `flatten`. [#4555](https://github.com/yandex/ClickHouse/pull/4555) [#4409](https://github.com/yandex/ClickHouse/pull/4409) ([alexey-milovidov](https://github.com/alexey-milovidov), [kzon](https://github.com/kzon))
-* Added functions `arrayEnumerateDenseRanked` and `arrayEnumerateUniqRanked` (it's like `arrayEnumerateUniq` but allows to fine tune array depth to look inside multidimensional arrays). [#4475](https://github.com/yandex/ClickHouse/pull/4475) ([proller](https://github.com/proller)) [#4601](https://github.com/yandex/ClickHouse/pull/4601) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Multiple JOINS with some restrictions: no asterisks, no complex aliases in ON/WHERE/GROUP BY/... [#4462](https://github.com/yandex/ClickHouse/pull/4462) ([Artem Zuikov](https://github.com/4ertus2))
+* Added full support for `Protobuf` format (input and output, nested data structures). [#4174](https://github.com/ClickHouse/ClickHouse/pull/4174) [#4493](https://github.com/ClickHouse/ClickHouse/pull/4493) ([Vitaly Baranov](https://github.com/vitlibar))
+* Added bitmap functions with Roaring Bitmaps. [#4207](https://github.com/ClickHouse/ClickHouse/pull/4207) ([Andy Yang](https://github.com/andyyzh)) [#4568](https://github.com/ClickHouse/ClickHouse/pull/4568) ([Vitaly Baranov](https://github.com/vitlibar))
+* Parquet format support. [#4448](https://github.com/ClickHouse/ClickHouse/pull/4448) ([proller](https://github.com/proller))
+* N-gram distance was added for fuzzy string comparison. It is similar to q-gram metrics in the R language. [#4466](https://github.com/ClickHouse/ClickHouse/pull/4466) ([Danila Kutenin](https://github.com/danlark1))
+* Combine rules for graphite rollup from dedicated aggregation and retention patterns. [#4426](https://github.com/ClickHouse/ClickHouse/pull/4426) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
+* Added `max_execution_speed` and `max_execution_speed_bytes` to limit resource usage. Added `min_execution_speed_bytes` setting to complement the `min_execution_speed`. [#4430](https://github.com/ClickHouse/ClickHouse/pull/4430) ([Winter Zhang](https://github.com/zhang2014))
+* Implemented function `flatten` (see the sketch after this list). [#4555](https://github.com/ClickHouse/ClickHouse/pull/4555) [#4409](https://github.com/ClickHouse/ClickHouse/pull/4409) ([alexey-milovidov](https://github.com/alexey-milovidov), [kzon](https://github.com/kzon))
+* Added functions `arrayEnumerateDenseRanked` and `arrayEnumerateUniqRanked` (like `arrayEnumerateUniq`, but allowing fine-tuning of the array depth to look inside multidimensional arrays). [#4475](https://github.com/ClickHouse/ClickHouse/pull/4475) ([proller](https://github.com/proller)) [#4601](https://github.com/ClickHouse/ClickHouse/pull/4601) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Multiple JOINs are supported with some restrictions: no asterisks, no complex aliases in ON/WHERE/GROUP BY/... [#4462](https://github.com/ClickHouse/ClickHouse/pull/4462) ([Artem Zuikov](https://github.com/4ertus2))
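+
+A one-liner showing `flatten` (expected result as a comment):
+
+```sql
+SELECT flatten([[[1, 2]], [[3, 4], [5]]]);  -- [1, 2, 3, 4, 5]
+```
+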
### Bug Fixes
* This release also contains all bug fixes from 19.3 and 19.1.
-* Fixed bug in data skipping indices: order of granules after INSERT was incorrect. [#4407](https://github.com/yandex/ClickHouse/pull/4407) ([Nikita Vasilev](https://github.com/nikvas0))
-* Fixed `set` index for `Nullable` and `LowCardinality` columns. Before it, `set` index with `Nullable` or `LowCardinality` column led to error `Data type must be deserialized with multiple streams` while selecting. [#4594](https://github.com/yandex/ClickHouse/pull/4594) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Correctly set update_time on full `executable` dictionary update. [#4551](https://github.com/yandex/ClickHouse/pull/4551) ([Tema Novikov](https://github.com/temoon))
-* Fix broken progress bar in 19.3. [#4627](https://github.com/yandex/ClickHouse/pull/4627) ([filimonov](https://github.com/filimonov))
-* Fixed inconsistent values of MemoryTracker when memory region was shrinked, in certain cases. [#4619](https://github.com/yandex/ClickHouse/pull/4619) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed undefined behaviour in ThreadPool. [#4612](https://github.com/yandex/ClickHouse/pull/4612) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed a very rare crash with the message `mutex lock failed: Invalid argument` that could happen when a MergeTree table was dropped concurrently with a SELECT. [#4608](https://github.com/yandex/ClickHouse/pull/4608) ([Alex Zatelepin](https://github.com/ztlpn))
-* ODBC driver compatibility with `LowCardinality` data type. [#4381](https://github.com/yandex/ClickHouse/pull/4381) ([proller](https://github.com/proller))
-* FreeBSD: Fixup for `AIOcontextPool: Found io_event with unknown id 0` error. [#4438](https://github.com/yandex/ClickHouse/pull/4438) ([urgordeadbeef](https://github.com/urgordeadbeef))
-* `system.part_log` table was created regardless to configuration. [#4483](https://github.com/yandex/ClickHouse/pull/4483) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix undefined behaviour in `dictIsIn` function for cache dictionaries. [#4515](https://github.com/yandex/ClickHouse/pull/4515) ([alesapin](https://github.com/alesapin))
-* Fixed a deadlock when a SELECT query locks the same table multiple times (e.g. from different threads or when executing multiple subqueries) and there is a concurrent DDL query. [#4535](https://github.com/yandex/ClickHouse/pull/4535) ([Alex Zatelepin](https://github.com/ztlpn))
-* Disable compile_expressions by default until we get own `llvm` contrib and can test it with `clang` and `asan`. [#4579](https://github.com/yandex/ClickHouse/pull/4579) ([alesapin](https://github.com/alesapin))
-* Prevent `std::terminate` when `invalidate_query` for `clickhouse` external dictionary source has returned wrong resultset (empty or more than one row or more than one column). Fixed issue when the `invalidate_query` was performed every five seconds regardless to the `lifetime`. [#4583](https://github.com/yandex/ClickHouse/pull/4583) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Avoid deadlock when the `invalidate_query` for a dictionary with `clickhouse` source was involving `system.dictionaries` table or `Dictionaries` database (rare case). [#4599](https://github.com/yandex/ClickHouse/pull/4599) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixes for CROSS JOIN with empty WHERE. [#4598](https://github.com/yandex/ClickHouse/pull/4598) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed segfault in function "replicate" when constant argument is passed. [#4603](https://github.com/yandex/ClickHouse/pull/4603) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix lambda function with predicate optimizer. [#4408](https://github.com/yandex/ClickHouse/pull/4408) ([Winter Zhang](https://github.com/zhang2014))
-* Multiple JOINs multiple fixes. [#4595](https://github.com/yandex/ClickHouse/pull/4595) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed bug in data skipping indices: order of granules after INSERT was incorrect. [#4407](https://github.com/ClickHouse/ClickHouse/pull/4407) ([Nikita Vasilev](https://github.com/nikvas0))
+* Fixed the `set` index for `Nullable` and `LowCardinality` columns. Previously, a `set` index with a `Nullable` or `LowCardinality` column led to the error `Data type must be deserialized with multiple streams` while selecting. [#4594](https://github.com/ClickHouse/ClickHouse/pull/4594) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Correctly set `update_time` on full `executable` dictionary update. [#4551](https://github.com/ClickHouse/ClickHouse/pull/4551) ([Tema Novikov](https://github.com/temoon))
+* Fix broken progress bar in 19.3. [#4627](https://github.com/ClickHouse/ClickHouse/pull/4627) ([filimonov](https://github.com/filimonov))
+* Fixed inconsistent values of MemoryTracker in certain cases when a memory region was shrunk. [#4619](https://github.com/ClickHouse/ClickHouse/pull/4619) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed undefined behaviour in ThreadPool. [#4612](https://github.com/ClickHouse/ClickHouse/pull/4612) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a very rare crash with the message `mutex lock failed: Invalid argument` that could happen when a MergeTree table was dropped concurrently with a SELECT. [#4608](https://github.com/ClickHouse/ClickHouse/pull/4608) ([Alex Zatelepin](https://github.com/ztlpn))
+* ODBC driver compatibility with `LowCardinality` data type. [#4381](https://github.com/ClickHouse/ClickHouse/pull/4381) ([proller](https://github.com/proller))
+* FreeBSD: Fixup for `AIOcontextPool: Found io_event with unknown id 0` error. [#4438](https://github.com/ClickHouse/ClickHouse/pull/4438) ([urgordeadbeef](https://github.com/urgordeadbeef))
+* The `system.part_log` table was created regardless of the configuration. [#4483](https://github.com/ClickHouse/ClickHouse/pull/4483) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix undefined behaviour in `dictIsIn` function for cache dictionaries. [#4515](https://github.com/ClickHouse/ClickHouse/pull/4515) ([alesapin](https://github.com/alesapin))
+* Fixed a deadlock when a SELECT query locks the same table multiple times (e.g. from different threads or when executing multiple subqueries) and there is a concurrent DDL query. [#4535](https://github.com/ClickHouse/ClickHouse/pull/4535) ([Alex Zatelepin](https://github.com/ztlpn))
+* Disable `compile_expressions` by default until we get our own `llvm` contrib and can test it with `clang` and `asan`. [#4579](https://github.com/ClickHouse/ClickHouse/pull/4579) ([alesapin](https://github.com/alesapin))
+* Prevent `std::terminate` when `invalidate_query` for a `clickhouse` external dictionary source has returned a wrong result set (empty, or with more than one row or more than one column). Fixed an issue where the `invalidate_query` was performed every five seconds regardless of the `lifetime`. [#4583](https://github.com/ClickHouse/ClickHouse/pull/4583) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Avoid a deadlock when the `invalidate_query` for a dictionary with a `clickhouse` source involved the `system.dictionaries` table or the `Dictionaries` database (a rare case). [#4599](https://github.com/ClickHouse/ClickHouse/pull/4599) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixes for CROSS JOIN with empty WHERE. [#4598](https://github.com/ClickHouse/ClickHouse/pull/4598) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed segfault in function `replicate` when a constant argument is passed. [#4603](https://github.com/ClickHouse/ClickHouse/pull/4603) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix lambda function with predicate optimizer. [#4408](https://github.com/ClickHouse/ClickHouse/pull/4408) ([Winter Zhang](https://github.com/zhang2014))
+* Multiple fixes for multiple JOINs. [#4595](https://github.com/ClickHouse/ClickHouse/pull/4595) ([Artem Zuikov](https://github.com/4ertus2))
### Improvements
-* Support aliases in JOIN ON section for right table columns. [#4412](https://github.com/yandex/ClickHouse/pull/4412) ([Artem Zuikov](https://github.com/4ertus2))
-* Result of multiple JOINs need correct result names to be used in subselects. Replace flat aliases with source names in result. [#4474](https://github.com/yandex/ClickHouse/pull/4474) ([Artem Zuikov](https://github.com/4ertus2))
-* Improve push-down logic for joined statements. [#4387](https://github.com/yandex/ClickHouse/pull/4387) ([Ivan](https://github.com/abyss7))
+* Support aliases in JOIN ON section for right table columns. [#4412](https://github.com/ClickHouse/ClickHouse/pull/4412) ([Artem Zuikov](https://github.com/4ertus2))
+* The result of multiple JOINs needs correct result names to be used in subselects. Flat aliases are replaced with source names in the result. [#4474](https://github.com/ClickHouse/ClickHouse/pull/4474) ([Artem Zuikov](https://github.com/4ertus2))
+* Improve push-down logic for joined statements. [#4387](https://github.com/ClickHouse/ClickHouse/pull/4387) ([Ivan](https://github.com/abyss7))
### Performance Improvements
-* Improved heuristics of "move to PREWHERE" optimization. [#4405](https://github.com/yandex/ClickHouse/pull/4405) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Use proper lookup tables that uses HashTable's API for 8-bit and 16-bit keys. [#4536](https://github.com/yandex/ClickHouse/pull/4536) ([Amos Bird](https://github.com/amosbird))
-* Improved performance of string comparison. [#4564](https://github.com/yandex/ClickHouse/pull/4564) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Cleanup distributed DDL queue in a separate thread so that it doesn't slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/yandex/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn))
-* When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode because the data size to read was sometimes underestimated by the size of one compressed block. [#4526](https://github.com/yandex/ClickHouse/pull/4526) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Improved heuristics of "move to PREWHERE" optimization. [#4405](https://github.com/ClickHouse/ClickHouse/pull/4405) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Use proper lookup tables that use HashTable's API for 8-bit and 16-bit keys. [#4536](https://github.com/ClickHouse/ClickHouse/pull/4536) ([Amos Bird](https://github.com/amosbird))
+* Improved performance of string comparison. [#4564](https://github.com/ClickHouse/ClickHouse/pull/4564) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Cleanup distributed DDL queue in a separate thread so that it doesn't slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/ClickHouse/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn))
+* When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode because the data size to read was sometimes underestimated by the size of one compressed block. [#4526](https://github.com/ClickHouse/ClickHouse/pull/4526) ([alexey-milovidov](https://github.com/alexey-milovidov))
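+
+For context, a hedged sketch of the setting mentioned in the last item (`big_table` is a hypothetical table name):
+
+```sql
+-- With the threshold at 1 byte, reads of data files should use O_DIRECT.
+SET min_bytes_to_use_direct_io = 1;
+SELECT count() FROM big_table;
+```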
### Build/Testing/Packaging Improvement
-* Added support for clang-9 [#4604](https://github.com/yandex/ClickHouse/pull/4604) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix wrong `__asm__` instructions (again) [#4621](https://github.com/yandex/ClickHouse/pull/4621) ([Konstantin Podshumok](https://github.com/podshumok))
-* Add ability to specify settings for `clickhouse-performance-test` from command line. [#4437](https://github.com/yandex/ClickHouse/pull/4437) ([alesapin](https://github.com/alesapin))
-* Add dictionaries tests to integration tests. [#4477](https://github.com/yandex/ClickHouse/pull/4477) ([alesapin](https://github.com/alesapin))
-* Added queries from the benchmark on the website to automated performance tests. [#4496](https://github.com/yandex/ClickHouse/pull/4496) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `xxhash.h` does not exist in external lz4 because it is an implementation detail and its symbols are namespaced with `XXH_NAMESPACE` macro. When lz4 is external, xxHash has to be external too, and the dependents have to link to it. [#4495](https://github.com/yandex/ClickHouse/pull/4495) ([Orivej Desh](https://github.com/orivej))
-* Fixed a case when `quantileTiming` aggregate function can be called with negative or floating point argument (this fixes fuzz test with undefined behaviour sanitizer). [#4506](https://github.com/yandex/ClickHouse/pull/4506) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Spelling error correction. [#4531](https://github.com/yandex/ClickHouse/pull/4531) ([sdk2](https://github.com/sdk2))
-* Fix compilation on Mac. [#4371](https://github.com/yandex/ClickHouse/pull/4371) ([Vitaly Baranov](https://github.com/vitlibar))
-* Build fixes for FreeBSD and various unusual build configurations. [#4444](https://github.com/yandex/ClickHouse/pull/4444) ([proller](https://github.com/proller))
+* Added support for clang-9. [#4604](https://github.com/ClickHouse/ClickHouse/pull/4604) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix wrong `__asm__` instructions (again). [#4621](https://github.com/ClickHouse/ClickHouse/pull/4621) ([Konstantin Podshumok](https://github.com/podshumok))
+* Add ability to specify settings for `clickhouse-performance-test` from command line. [#4437](https://github.com/ClickHouse/ClickHouse/pull/4437) ([alesapin](https://github.com/alesapin))
+* Add dictionary tests to integration tests. [#4477](https://github.com/ClickHouse/ClickHouse/pull/4477) ([alesapin](https://github.com/alesapin))
+* Added queries from the benchmark on the website to automated performance tests. [#4496](https://github.com/ClickHouse/ClickHouse/pull/4496) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `xxhash.h` does not exist in external lz4 because it is an implementation detail and its symbols are namespaced with `XXH_NAMESPACE` macro. When lz4 is external, xxHash has to be external too, and the dependents have to link to it. [#4495](https://github.com/ClickHouse/ClickHouse/pull/4495) ([Orivej Desh](https://github.com/orivej))
+* Fixed a case when `quantileTiming` aggregate function can be called with negative or floating point argument (this fixes fuzz test with undefined behaviour sanitizer). [#4506](https://github.com/ClickHouse/ClickHouse/pull/4506) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Spelling error correction. [#4531](https://github.com/ClickHouse/ClickHouse/pull/4531) ([sdk2](https://github.com/sdk2))
+* Fix compilation on Mac. [#4371](https://github.com/ClickHouse/ClickHouse/pull/4371) ([Vitaly Baranov](https://github.com/vitlibar))
+* Build fixes for FreeBSD and various unusual build configurations. [#4444](https://github.com/ClickHouse/ClickHouse/pull/4444) ([proller](https://github.com/proller))
## ClickHouse release 19.3.9.1, 2019-04-02
### Bug Fixes
-* Fix crash in `FULL/RIGHT JOIN` when we joining on nullable vs not nullable. [#4855](https://github.com/yandex/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
-* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/yandex/ClickHouse/pull/4835) ([proller](https://github.com/proller))
-* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/yandex/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fix crash in `FULL/RIGHT JOIN` when joining on nullable vs. not nullable columns. [#4855](https://github.com/ClickHouse/ClickHouse/pull/4855) ([Artem Zuikov](https://github.com/4ertus2))
+* Fix segmentation fault in `clickhouse-copier`. [#4835](https://github.com/ClickHouse/ClickHouse/pull/4835) ([proller](https://github.com/proller))
+* Fixed reading from `Array(LowCardinality)` column in rare case when column contained a long sequence of empty arrays. [#4850](https://github.com/ClickHouse/ClickHouse/pull/4850) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
### Build/Testing/Packaging Improvement
-* Add a way to launch clickhouse-server image from a custom user [#4753](https://github.com/yandex/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
+* Add a way to launch the clickhouse-server image as a custom user. [#4753](https://github.com/ClickHouse/ClickHouse/pull/4753) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
## ClickHouse release 19.3.7, 2019-03-12
### Bug fixes
-* Fixed error in #3920. This error manifestate itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. This bug first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/yandex/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed error in #3920. This error manifested itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. This bug first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/ClickHouse/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.3.6, 2019-03-02
### Bug fixes
-* When there are more than 1000 threads in a thread pool, `std::terminate` may happen on thread exit. [Azat Khuzhin](https://github.com/azat) [#4485](https://github.com/yandex/ClickHouse/pull/4485) [#4505](https://github.com/yandex/ClickHouse/pull/4505) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Now it's possible to create `ReplicatedMergeTree*` tables with comments on columns without defaults and tables with columns codecs without comments and defaults. Also fix comparison of codecs. [#4523](https://github.com/yandex/ClickHouse/pull/4523) ([alesapin](https://github.com/alesapin))
-* Fixed crash on JOIN with array or tuple. [#4552](https://github.com/yandex/ClickHouse/pull/4552) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed crash in clickhouse-copier with the message `ThreadStatus not created`. [#4540](https://github.com/yandex/ClickHouse/pull/4540) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed hangup on server shutdown if distributed DDLs were used. [#4472](https://github.com/yandex/ClickHouse/pull/4472) ([Alex Zatelepin](https://github.com/ztlpn))
-* Incorrect column numbers were printed in error message about text format parsing for columns with number greater than 10. [#4484](https://github.com/yandex/ClickHouse/pull/4484) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* When there are more than 1000 threads in a thread pool, `std::terminate` may happen on thread exit. [Azat Khuzhin](https://github.com/azat) [#4485](https://github.com/ClickHouse/ClickHouse/pull/4485) [#4505](https://github.com/ClickHouse/ClickHouse/pull/4505) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Now it's possible to create `ReplicatedMergeTree*` tables with comments on columns without defaults and tables with column codecs without comments and defaults. Also fixed comparison of codecs. [#4523](https://github.com/ClickHouse/ClickHouse/pull/4523) ([alesapin](https://github.com/alesapin))
+* Fixed crash on JOIN with array or tuple. [#4552](https://github.com/ClickHouse/ClickHouse/pull/4552) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed crash in clickhouse-copier with the message `ThreadStatus not created`. [#4540](https://github.com/ClickHouse/ClickHouse/pull/4540) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed hangup on server shutdown if distributed DDLs were used. [#4472](https://github.com/ClickHouse/ClickHouse/pull/4472) ([Alex Zatelepin](https://github.com/ztlpn))
+* Incorrect column numbers were printed in the error message about text format parsing for columns with numbers greater than 10. [#4484](https://github.com/ClickHouse/ClickHouse/pull/4484) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Build/Testing/Packaging Improvements
-* Fixed build with AVX enabled. [#4527](https://github.com/yandex/ClickHouse/pull/4527) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Enable extended accounting and IO accounting based on good known version instead of kernel under which it is compiled. [#4541](https://github.com/yandex/ClickHouse/pull/4541) ([nvartolomei](https://github.com/nvartolomei))
-* Allow to skip setting of core_dump.size_limit, warning instead of throw if limit set fail. [#4473](https://github.com/yandex/ClickHouse/pull/4473) ([proller](https://github.com/proller))
-* Removed the `inline` tags of `void readBinary(...)` in `Field.cpp`. Also merged redundant `namespace DB` blocks. [#4530](https://github.com/yandex/ClickHouse/pull/4530) ([hcz](https://github.com/hczhcz))
+* Fixed build with AVX enabled. [#4527](https://github.com/ClickHouse/ClickHouse/pull/4527) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Enable extended accounting and IO accounting based on a known-good kernel version instead of the kernel under which it is compiled. [#4541](https://github.com/ClickHouse/ClickHouse/pull/4541) ([nvartolomei](https://github.com/nvartolomei))
+* Allow skipping the setting of `core_dump.size_limit`, with a warning instead of an exception if setting the limit fails. [#4473](https://github.com/ClickHouse/ClickHouse/pull/4473) ([proller](https://github.com/proller))
+* Removed the `inline` tags of `void readBinary(...)` in `Field.cpp`. Also merged redundant `namespace DB` blocks. [#4530](https://github.com/ClickHouse/ClickHouse/pull/4530) ([hcz](https://github.com/hczhcz))
## ClickHouse release 19.3.5, 2019-02-21
### Bug fixes
-* Fixed bug with large http insert queries processing. [#4454](https://github.com/yandex/ClickHouse/pull/4454) ([alesapin](https://github.com/alesapin))
-* Fixed backward incompatibility with old versions due to wrong implementation of `send_logs_level` setting. [#4445](https://github.com/yandex/ClickHouse/pull/4445) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed backward incompatibility of table function `remote` introduced with column comments. [#4446](https://github.com/yandex/ClickHouse/pull/4446) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug with processing of large HTTP INSERT queries. [#4454](https://github.com/ClickHouse/ClickHouse/pull/4454) ([alesapin](https://github.com/alesapin))
+* Fixed backward incompatibility with old versions due to wrong implementation of `send_logs_level` setting. [#4445](https://github.com/ClickHouse/ClickHouse/pull/4445) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed backward incompatibility of table function `remote` introduced with column comments. [#4446](https://github.com/ClickHouse/ClickHouse/pull/4446) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.3.4, 2019-02-16
### Improvements
-* Table index size is not accounted for memory limits when doing `ATTACH TABLE` query. Avoided the possibility that a table cannot be attached after being detached. [#4396](https://github.com/yandex/ClickHouse/pull/4396) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Slightly raised up the limit on max string and array size received from ZooKeeper. It allows to continue to work with increased size of `CLIENT_JVMFLAGS=-Djute.maxbuffer=...` on ZooKeeper. [#4398](https://github.com/yandex/ClickHouse/pull/4398) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Allow to repair abandoned replica even if it already has huge number of nodes in its queue. [#4399](https://github.com/yandex/ClickHouse/pull/4399) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Add one required argument to `SET` index (max stored rows number). [#4386](https://github.com/yandex/ClickHouse/pull/4386) ([Nikita Vasilev](https://github.com/nikvas0))
+* Table index size is not accounted for in memory limits when doing an `ATTACH TABLE` query. This avoids the possibility that a table cannot be attached after being detached. [#4396](https://github.com/ClickHouse/ClickHouse/pull/4396) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Slightly raised the limit on the maximum string and array size received from ZooKeeper. This allows continuing to work with an increased `CLIENT_JVMFLAGS=-Djute.maxbuffer=...` on ZooKeeper. [#4398](https://github.com/ClickHouse/ClickHouse/pull/4398) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Allow repairing an abandoned replica even if it already has a huge number of nodes in its queue. [#4399](https://github.com/ClickHouse/ClickHouse/pull/4399) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add one required argument to the `SET` index (the maximum number of stored rows). [#4386](https://github.com/ClickHouse/ClickHouse/pull/4386) ([Nikita Vasilev](https://github.com/nikvas0))
### Bug Fixes
-* Fixed `WITH ROLLUP` result for group by single `LowCardinality` key. [#4384](https://github.com/yandex/ClickHouse/pull/4384) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Fixed bug in the set index (dropping a granule if it contains more than `max_rows` rows). [#4386](https://github.com/yandex/ClickHouse/pull/4386) ([Nikita Vasilev](https://github.com/nikvas0))
-* A lot of FreeBSD build fixes. [#4397](https://github.com/yandex/ClickHouse/pull/4397) ([proller](https://github.com/proller))
-* Fixed aliases substitution in queries with subquery containing same alias (issue [#4110](https://github.com/yandex/ClickHouse/issues/4110)). [#4351](https://github.com/yandex/ClickHouse/pull/4351) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed `WITH ROLLUP` result for group by single `LowCardinality` key. [#4384](https://github.com/ClickHouse/ClickHouse/pull/4384) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Fixed bug in the set index (dropping a granule if it contains more than `max_rows` rows). [#4386](https://github.com/ClickHouse/ClickHouse/pull/4386) ([Nikita Vasilev](https://github.com/nikvas0))
+* A lot of FreeBSD build fixes. [#4397](https://github.com/ClickHouse/ClickHouse/pull/4397) ([proller](https://github.com/proller))
+* Fixed aliases substitution in queries with subquery containing same alias (issue [#4110](https://github.com/ClickHouse/ClickHouse/issues/4110)). [#4351](https://github.com/ClickHouse/ClickHouse/pull/4351) ([Artem Zuikov](https://github.com/4ertus2))
### Build/Testing/Packaging Improvements
-* Add ability to run `clickhouse-server` for stateless tests in docker image. [#4347](https://github.com/yandex/ClickHouse/pull/4347) ([Vasily Nemkov](https://github.com/Enmk))
+* Add ability to run `clickhouse-server` for stateless tests in docker image. [#4347](https://github.com/ClickHouse/ClickHouse/pull/4347) ([Vasily Nemkov](https://github.com/Enmk))
## ClickHouse release 19.3.3, 2019-02-13
### New Features
-* Added the `KILL MUTATION` statement that allows removing mutations that are for some reasons stuck. Added `latest_failed_part`, `latest_fail_time`, `latest_fail_reason` fields to the `system.mutations` table for easier troubleshooting. [#4287](https://github.com/yandex/ClickHouse/pull/4287) ([Alex Zatelepin](https://github.com/ztlpn))
-* Added aggregate function `entropy` which computes Shannon entropy. [#4238](https://github.com/yandex/ClickHouse/pull/4238) ([Quid37](https://github.com/Quid37))
-* Added ability to send queries `INSERT INTO tbl VALUES (....` to server without splitting on `query` and `data` parts. [#4301](https://github.com/yandex/ClickHouse/pull/4301) ([alesapin](https://github.com/alesapin))
-* Generic implementation of `arrayWithConstant` function was added. [#4322](https://github.com/yandex/ClickHouse/pull/4322) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Implemented `NOT BETWEEN` comparison operator. [#4228](https://github.com/yandex/ClickHouse/pull/4228) ([Dmitry Naumov](https://github.com/nezed))
-* Implement `sumMapFiltered` in order to be able to limit the number of keys for which values will be summed by `sumMap`. [#4129](https://github.com/yandex/ClickHouse/pull/4129) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
-* Added support of `Nullable` types in `mysql` table function. [#4198](https://github.com/yandex/ClickHouse/pull/4198) ([Emmanuel Donin de Rosière](https://github.com/edonin))
-* Support for arbitrary constant expressions in `LIMIT` clause. [#4246](https://github.com/yandex/ClickHouse/pull/4246) ([k3box](https://github.com/k3box))
-* Added `topKWeighted` aggregate function that takes additional argument with (unsigned integer) weight. [#4245](https://github.com/yandex/ClickHouse/pull/4245) ([Andrew Golman](https://github.com/andrewgolman))
-* `StorageJoin` now supports `join_any_take_last_row` setting that allows overwriting existing values of the same key. [#3973](https://github.com/yandex/ClickHouse/pull/3973) ([Amos Bird](https://github.com/amosbird)
-* Added function `toStartOfInterval`. [#4304](https://github.com/yandex/ClickHouse/pull/4304) ([Vitaly Baranov](https://github.com/vitlibar))
-* Added `RowBinaryWithNamesAndTypes` format. [#4200](https://github.com/yandex/ClickHouse/pull/4200) ([Oleg V. Kozlyuk](https://github.com/DarkWanderer))
-* Added `IPv4` and `IPv6` data types. More effective implementations of `IPv*` functions. [#3669](https://github.com/yandex/ClickHouse/pull/3669) ([Vasily Nemkov](https://github.com/Enmk))
-* Added function `toStartOfTenMinutes()`. [#4298](https://github.com/yandex/ClickHouse/pull/4298) ([Vitaly Baranov](https://github.com/vitlibar))
-* Added `Protobuf` output format. [#4005](https://github.com/yandex/ClickHouse/pull/4005) [#4158](https://github.com/yandex/ClickHouse/pull/4158) ([Vitaly Baranov](https://github.com/vitlibar))
-* Added brotli support for HTTP interface for data import (INSERTs). [#4235](https://github.com/yandex/ClickHouse/pull/4235) ([Mikhail ](https://github.com/fandyushin))
-* Added hints while user make typo in function name or type in command line client. [#4239](https://github.com/yandex/ClickHouse/pull/4239) ([Danila Kutenin](https://github.com/danlark1))
-* Added `Query-Id` to Server's HTTP Response header. [#4231](https://github.com/yandex/ClickHouse/pull/4231) ([Mikhail ](https://github.com/fandyushin))
+* Added the `KILL MUTATION` statement that allows removing mutations that are stuck for some reason (a combined example follows this list). Added `latest_failed_part`, `latest_fail_time`, `latest_fail_reason` fields to the `system.mutations` table for easier troubleshooting. [#4287](https://github.com/ClickHouse/ClickHouse/pull/4287) ([Alex Zatelepin](https://github.com/ztlpn))
+* Added aggregate function `entropy` which computes Shannon entropy. [#4238](https://github.com/ClickHouse/ClickHouse/pull/4238) ([Quid37](https://github.com/Quid37))
+* Added the ability to send queries `INSERT INTO tbl VALUES (....` to the server without splitting them into `query` and `data` parts. [#4301](https://github.com/ClickHouse/ClickHouse/pull/4301) ([alesapin](https://github.com/alesapin))
+* Generic implementation of `arrayWithConstant` function was added. [#4322](https://github.com/ClickHouse/ClickHouse/pull/4322) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Implemented `NOT BETWEEN` comparison operator. [#4228](https://github.com/ClickHouse/ClickHouse/pull/4228) ([Dmitry Naumov](https://github.com/nezed))
+* Implement `sumMapFiltered` in order to be able to limit the number of keys for which values will be summed by `sumMap`. [#4129](https://github.com/ClickHouse/ClickHouse/pull/4129) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
+* Added support of `Nullable` types in `mysql` table function. [#4198](https://github.com/ClickHouse/ClickHouse/pull/4198) ([Emmanuel Donin de Rosière](https://github.com/edonin))
+* Support for arbitrary constant expressions in `LIMIT` clause. [#4246](https://github.com/ClickHouse/ClickHouse/pull/4246) ([k3box](https://github.com/k3box))
+* Added the `topKWeighted` aggregate function that takes an additional argument with an (unsigned integer) weight. [#4245](https://github.com/ClickHouse/ClickHouse/pull/4245) ([Andrew Golman](https://github.com/andrewgolman))
+* `StorageJoin` now supports the `join_any_take_last_row` setting that allows overwriting existing values of the same key. [#3973](https://github.com/ClickHouse/ClickHouse/pull/3973) ([Amos Bird](https://github.com/amosbird))
+* Added function `toStartOfInterval`. [#4304](https://github.com/ClickHouse/ClickHouse/pull/4304) ([Vitaly Baranov](https://github.com/vitlibar))
+* Added `RowBinaryWithNamesAndTypes` format. [#4200](https://github.com/ClickHouse/ClickHouse/pull/4200) ([Oleg V. Kozlyuk](https://github.com/DarkWanderer))
+* Added `IPv4` and `IPv6` data types. More efficient implementations of `IPv*` functions. [#3669](https://github.com/ClickHouse/ClickHouse/pull/3669) ([Vasily Nemkov](https://github.com/Enmk))
+* Added function `toStartOfTenMinutes()`. [#4298](https://github.com/ClickHouse/ClickHouse/pull/4298) ([Vitaly Baranov](https://github.com/vitlibar))
+* Added `Protobuf` output format. [#4005](https://github.com/ClickHouse/ClickHouse/pull/4005) [#4158](https://github.com/ClickHouse/ClickHouse/pull/4158) ([Vitaly Baranov](https://github.com/vitlibar))
+* Added brotli support to the HTTP interface for data import (INSERTs). [#4235](https://github.com/ClickHouse/ClickHouse/pull/4235) ([Mikhail](https://github.com/fandyushin))
+* Added hints when a user makes a typo in a function name or type in the command line client. [#4239](https://github.com/ClickHouse/ClickHouse/pull/4239) ([Danila Kutenin](https://github.com/danlark1))
+* Added `Query-Id` to the server's HTTP response header. [#4231](https://github.com/ClickHouse/ClickHouse/pull/4231) ([Mikhail](https://github.com/fandyushin))
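+
+A few of the entries above in one hedged sketch (`hits` and its columns are hypothetical):
+
+```sql
+-- Remove a stuck mutation (see the `KILL MUTATION` entry above).
+KILL MUTATION WHERE database = 'default' AND table = 'hits';
+
+-- `entropy`, `NOT BETWEEN` and `toStartOfTenMinutes` in one query.
+SELECT
+    toStartOfTenMinutes(EventTime) AS t10,
+    entropy(UserID) AS user_entropy,
+    countIf(Age NOT BETWEEN 18 AND 65) AS outside_range
+FROM hits
+GROUP BY t10;
+```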
### Experimental features
-* Added `minmax` and `set` data skipping indices for MergeTree table engines family. [#4143](https://github.com/yandex/ClickHouse/pull/4143) ([Nikita Vasilev](https://github.com/nikvas0))
-* Added conversion of `CROSS JOIN` to `INNER JOIN` if possible. [#4221](https://github.com/yandex/ClickHouse/pull/4221) [#4266](https://github.com/yandex/ClickHouse/pull/4266) ([Artem Zuikov](https://github.com/4ertus2))
+* Added `minmax` and `set` data skipping indices for the MergeTree family of table engines (see the sketch after this list). [#4143](https://github.com/ClickHouse/ClickHouse/pull/4143) ([Nikita Vasilev](https://github.com/nikvas0))
+* Added conversion of `CROSS JOIN` to `INNER JOIN` if possible. [#4221](https://github.com/ClickHouse/ClickHouse/pull/4221) [#4266](https://github.com/ClickHouse/ClickHouse/pull/4266) ([Artem Zuikov](https://github.com/4ertus2))
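+
+A minimal sketch of the experimental index syntax, as described here and in the `INDEX` entry under Backward Incompatible Changes below (table and column names are hypothetical, and the `GRANULARITY` values are arbitrary):
+
+```sql
+-- May require SET allow_experimental_data_skipping_indices = 1 on these versions.
+CREATE TABLE hits_idx
+(
+    EventDate Date,
+    UserID UInt64,
+    URL String,
+    INDEX idx_user (UserID) TYPE minmax GRANULARITY 4,
+    INDEX idx_url (URL) TYPE set(1000) GRANULARITY 4  -- 1000 = the required max stored rows
+) ENGINE = MergeTree()
+ORDER BY (EventDate, UserID);
+```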
### Bug Fixes
-* Fixed `Not found column` for duplicate columns in `JOIN ON` section. [#4279](https://github.com/yandex/ClickHouse/pull/4279) ([Artem Zuikov](https://github.com/4ertus2))
-* Make `START REPLICATED SENDS` command start replicated sends. [#4229](https://github.com/yandex/ClickHouse/pull/4229) ([nvartolomei](https://github.com/nvartolomei))
-* Fixed aggregate functions execution with `Array(LowCardinality)` arguments. [#4055](https://github.com/yandex/ClickHouse/pull/4055) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Fixed wrong behaviour when doing `INSERT ... SELECT ... FROM file(...)` query and file has `CSVWithNames` or `TSVWIthNames` format and the first data row is missing. [#4297](https://github.com/yandex/ClickHouse/pull/4297) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed crash on dictionary reload if dictionary not available. This bug was appeared in 19.1.6. [#4188](https://github.com/yandex/ClickHouse/pull/4188) ([proller](https://github.com/proller))
-* Fixed `ALL JOIN` with duplicates in right table. [#4184](https://github.com/yandex/ClickHouse/pull/4184) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed segmentation fault with `use_uncompressed_cache=1` and exception with wrong uncompressed size. This bug was appeared in 19.1.6. [#4186](https://github.com/yandex/ClickHouse/pull/4186) ([alesapin](https://github.com/alesapin))
-* Fixed `compile_expressions` bug with comparison of big (more than int16) dates. [#4341](https://github.com/yandex/ClickHouse/pull/4341) ([alesapin](https://github.com/alesapin))
-* Fixed infinite loop when selecting from table function `numbers(0)`. [#4280](https://github.com/yandex/ClickHouse/pull/4280) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Temporarily disable predicate optimization for `ORDER BY`. [#3890](https://github.com/yandex/ClickHouse/pull/3890) ([Winter Zhang](https://github.com/zhang2014))
-* Fixed `Illegal instruction` error when using base64 functions on old CPUs. This error has been reproduced only when ClickHouse was compiled with gcc-8. [#4275](https://github.com/yandex/ClickHouse/pull/4275) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed `No message received` error when interacting with PostgreSQL ODBC Driver through TLS connection. Also fixes segfault when using MySQL ODBC Driver. [#4170](https://github.com/yandex/ClickHouse/pull/4170) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed incorrect result when `Date` and `DateTime` arguments are used in branches of conditional operator (function `if`). Added generic case for function `if`. [#4243](https://github.com/yandex/ClickHouse/pull/4243) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* ClickHouse dictionaries now load within `clickhouse` process. [#4166](https://github.com/yandex/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed deadlock when `SELECT` from a table with `File` engine was retried after `No such file or directory` error. [#4161](https://github.com/yandex/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fixed error: if there is a database with `Dictionary` engine, all dictionaries forced to load at server startup, and if there is a dictionary with ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error when system logs are tried to create again at server shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Correctly return the right type and properly handle locks in `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
-* Added `sumMapWithOverflow` function. [#4151](https://github.com/yandex/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
-* Fixed segfault with `allow_experimental_multiple_joins_emulation`. [52de2c](https://github.com/yandex/ClickHouse/commit/52de2cd927f7b5257dd67e175f0a5560a48840d0) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed bug with incorrect `Date` and `DateTime` comparison. [#4237](https://github.com/yandex/ClickHouse/pull/4237) ([valexey](https://github.com/valexey))
-* Fixed fuzz test under undefined behavior sanitizer: added parameter type check for `quantile*Weighted` family of functions. [#4145](https://github.com/yandex/ClickHouse/pull/4145) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed rare race condition when removing of old data parts can fail with `File not found` error. [#4378](https://github.com/yandex/ClickHouse/pull/4378) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix install package with missing /etc/clickhouse-server/config.xml. [#4343](https://github.com/yandex/ClickHouse/pull/4343) ([proller](https://github.com/proller))
+* Fixed `Not found column` for duplicate columns in `JOIN ON` section. [#4279](https://github.com/ClickHouse/ClickHouse/pull/4279) ([Artem Zuikov](https://github.com/4ertus2))
+* Make `START REPLICATED SENDS` command start replicated sends. [#4229](https://github.com/ClickHouse/ClickHouse/pull/4229) ([nvartolomei](https://github.com/nvartolomei))
+* Fixed aggregate functions execution with `Array(LowCardinality)` arguments. [#4055](https://github.com/ClickHouse/ClickHouse/pull/4055) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Fixed wrong behaviour when doing an `INSERT ... SELECT ... FROM file(...)` query where the file has `CSVWithNames` or `TSVWithNames` format and the first data row is missing. [#4297](https://github.com/ClickHouse/ClickHouse/pull/4297) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed crash on dictionary reload if the dictionary is not available. This bug appeared in 19.1.6. [#4188](https://github.com/ClickHouse/ClickHouse/pull/4188) ([proller](https://github.com/proller))
+* Fixed `ALL JOIN` with duplicates in right table. [#4184](https://github.com/ClickHouse/ClickHouse/pull/4184) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed segmentation fault with `use_uncompressed_cache=1` and exception with wrong uncompressed size. This bug appeared in 19.1.6. [#4186](https://github.com/ClickHouse/ClickHouse/pull/4186) ([alesapin](https://github.com/alesapin))
+* Fixed `compile_expressions` bug with comparison of big (more than int16) dates. [#4341](https://github.com/ClickHouse/ClickHouse/pull/4341) ([alesapin](https://github.com/alesapin))
+* Fixed infinite loop when selecting from table function `numbers(0)`. [#4280](https://github.com/ClickHouse/ClickHouse/pull/4280) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Temporarily disable predicate optimization for `ORDER BY`. [#3890](https://github.com/ClickHouse/ClickHouse/pull/3890) ([Winter Zhang](https://github.com/zhang2014))
+* Fixed `Illegal instruction` error when using base64 functions on old CPUs. This error has been reproduced only when ClickHouse was compiled with gcc-8. [#4275](https://github.com/ClickHouse/ClickHouse/pull/4275) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed `No message received` error when interacting with PostgreSQL ODBC Driver through TLS connection. Also fixes segfault when using MySQL ODBC Driver. [#4170](https://github.com/ClickHouse/ClickHouse/pull/4170) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed incorrect result when `Date` and `DateTime` arguments are used in branches of conditional operator (function `if`). Added generic case for function `if`. [#4243](https://github.com/ClickHouse/ClickHouse/pull/4243) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* ClickHouse dictionaries now load within the `clickhouse` process. [#4166](https://github.com/ClickHouse/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed deadlock when `SELECT` from a table with `File` engine was retried after `No such file or directory` error. [#4161](https://github.com/ClickHouse/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a race condition where selecting from `system.tables` could give a `table doesn't exist` error. [#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `clickhouse-client` could segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/ClickHouse/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/ClickHouse/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fixed error: if there is a database with `Dictionary` engine, all dictionaries are forced to load at server startup, and a dictionary with a ClickHouse source pointing to localhost cannot load. [#4255](https://github.com/ClickHouse/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed an error when an attempt was made to create system logs again at server shutdown. [#4254](https://github.com/ClickHouse/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Correctly return the right type and properly handle locks in `joinGet` function. [#4153](https://github.com/ClickHouse/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
+* Added `sumMapWithOverflow` function. [#4151](https://github.com/ClickHouse/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
+* Fixed segfault with `allow_experimental_multiple_joins_emulation`. [52de2c](https://github.com/ClickHouse/ClickHouse/commit/52de2cd927f7b5257dd67e175f0a5560a48840d0) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed bug with incorrect `Date` and `DateTime` comparison. [#4237](https://github.com/ClickHouse/ClickHouse/pull/4237) ([valexey](https://github.com/valexey))
+* Fixed fuzz test under undefined behavior sanitizer: added parameter type check for `quantile*Weighted` family of functions. [#4145](https://github.com/ClickHouse/ClickHouse/pull/4145) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed rare race condition when removing of old data parts can fail with `File not found` error. [#4378](https://github.com/ClickHouse/ClickHouse/pull/4378) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix install package with missing /etc/clickhouse-server/config.xml. [#4343](https://github.com/ClickHouse/ClickHouse/pull/4343) ([proller](https://github.com/proller))
### Build/Testing/Packaging Improvements
-* Debian package: correct /etc/clickhouse-server/preprocessed link according to config. [#4205](https://github.com/yandex/ClickHouse/pull/4205) ([proller](https://github.com/proller))
-* Various build fixes for FreeBSD. [#4225](https://github.com/yandex/ClickHouse/pull/4225) ([proller](https://github.com/proller))
-* Added ability to create, fill and drop tables in perftest. [#4220](https://github.com/yandex/ClickHouse/pull/4220) ([alesapin](https://github.com/alesapin))
-* Added a script to check for duplicate includes. [#4326](https://github.com/yandex/ClickHouse/pull/4326) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added ability to run queries by index in performance test. [#4264](https://github.com/yandex/ClickHouse/pull/4264) ([alesapin](https://github.com/alesapin))
-* Package with debug symbols is suggested to be installed. [#4274](https://github.com/yandex/ClickHouse/pull/4274) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Refactoring of performance-test. Better logging and signals handling. [#4171](https://github.com/yandex/ClickHouse/pull/4171) ([alesapin](https://github.com/alesapin))
-* Added docs to anonymized Yandex.Metrika datasets. [#4164](https://github.com/yandex/ClickHouse/pull/4164) ([alesapin](https://github.com/alesapin))
-* Аdded tool for converting an old month-partitioned part to the custom-partitioned format. [#4195](https://github.com/yandex/ClickHouse/pull/4195) ([Alex Zatelepin](https://github.com/ztlpn))
-* Added docs about two datasets in s3. [#4144](https://github.com/yandex/ClickHouse/pull/4144) ([alesapin](https://github.com/alesapin))
-* Added script which creates changelog from pull requests description. [#4169](https://github.com/yandex/ClickHouse/pull/4169) [#4173](https://github.com/yandex/ClickHouse/pull/4173) ([KochetovNicolai](https://github.com/KochetovNicolai)) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Added puppet module for Clickhouse. [#4182](https://github.com/yandex/ClickHouse/pull/4182) ([Maxim Fedotov](https://github.com/MaxFedotov))
-* Added docs for a group of undocumented functions. [#4168](https://github.com/yandex/ClickHouse/pull/4168) ([Winter Zhang](https://github.com/zhang2014))
-* ARM build fixes. [#4210](https://github.com/yandex/ClickHouse/pull/4210)[#4306](https://github.com/yandex/ClickHouse/pull/4306) [#4291](https://github.com/yandex/ClickHouse/pull/4291) ([proller](https://github.com/proller)) ([proller](https://github.com/proller))
-* Dictionary tests now able to run from `ctest`. [#4189](https://github.com/yandex/ClickHouse/pull/4189) ([proller](https://github.com/proller))
-* Now `/etc/ssl` is used as default directory with SSL certificates. [#4167](https://github.com/yandex/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added checking SSE and AVX instruction at start. [#4234](https://github.com/yandex/ClickHouse/pull/4234) ([Igr](https://github.com/igron99))
-* Init script will wait server until start. [#4281](https://github.com/yandex/ClickHouse/pull/4281) ([proller](https://github.com/proller))
+* Debian package: correct /etc/clickhouse-server/preprocessed link according to config. [#4205](https://github.com/ClickHouse/ClickHouse/pull/4205) ([proller](https://github.com/proller))
+* Various build fixes for FreeBSD. [#4225](https://github.com/ClickHouse/ClickHouse/pull/4225) ([proller](https://github.com/proller))
+* Added ability to create, fill and drop tables in perftest. [#4220](https://github.com/ClickHouse/ClickHouse/pull/4220) ([alesapin](https://github.com/alesapin))
+* Added a script to check for duplicate includes. [#4326](https://github.com/ClickHouse/ClickHouse/pull/4326) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added ability to run queries by index in performance test. [#4264](https://github.com/ClickHouse/ClickHouse/pull/4264) ([alesapin](https://github.com/alesapin))
+* The package with debug symbols is suggested for installation. [#4274](https://github.com/ClickHouse/ClickHouse/pull/4274) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Refactoring of performance-test. Better logging and signals handling. [#4171](https://github.com/ClickHouse/ClickHouse/pull/4171) ([alesapin](https://github.com/alesapin))
+* Added docs to anonymized Yandex.Metrika datasets. [#4164](https://github.com/ClickHouse/ClickHouse/pull/4164) ([alesapin](https://github.com/alesapin))
+* Added a tool for converting an old month-partitioned part to the custom-partitioned format. [#4195](https://github.com/ClickHouse/ClickHouse/pull/4195) ([Alex Zatelepin](https://github.com/ztlpn))
+* Added docs about two datasets in s3. [#4144](https://github.com/ClickHouse/ClickHouse/pull/4144) ([alesapin](https://github.com/alesapin))
+* Added a script which creates a changelog from pull request descriptions. [#4169](https://github.com/ClickHouse/ClickHouse/pull/4169) [#4173](https://github.com/ClickHouse/ClickHouse/pull/4173) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Added a Puppet module for ClickHouse. [#4182](https://github.com/ClickHouse/ClickHouse/pull/4182) ([Maxim Fedotov](https://github.com/MaxFedotov))
+* Added docs for a group of undocumented functions. [#4168](https://github.com/ClickHouse/ClickHouse/pull/4168) ([Winter Zhang](https://github.com/zhang2014))
+* ARM build fixes. [#4210](https://github.com/ClickHouse/ClickHouse/pull/4210) [#4306](https://github.com/ClickHouse/ClickHouse/pull/4306) [#4291](https://github.com/ClickHouse/ClickHouse/pull/4291) ([proller](https://github.com/proller))
+* Dictionary tests are now able to run from `ctest`. [#4189](https://github.com/ClickHouse/ClickHouse/pull/4189) ([proller](https://github.com/proller))
+* Now `/etc/ssl` is used as default directory with SSL certificates. [#4167](https://github.com/ClickHouse/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added checking of SSE and AVX instructions at startup. [#4234](https://github.com/ClickHouse/ClickHouse/pull/4234) ([Igr](https://github.com/igron99))
+* Init script will wait for the server to start. [#4281](https://github.com/ClickHouse/ClickHouse/pull/4281) ([proller](https://github.com/proller))
### Backward Incompatible Changes
-* Removed `allow_experimental_low_cardinality_type` setting. `LowCardinality` data types are production ready. [#4323](https://github.com/yandex/ClickHouse/pull/4323) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Reduce mark cache size and uncompressed cache size accordingly to available memory amount. [#4240](https://github.com/yandex/ClickHouse/pull/4240) ([Lopatin Konstantin](https://github.com/k-lopatin)
-* Added keyword `INDEX` in `CREATE TABLE` query. A column with name `index` must be quoted with backticks or double quotes: `` `index` ``. [#4143](https://github.com/yandex/ClickHouse/pull/4143) ([Nikita Vasilev](https://github.com/nikvas0))
-* `sumMap` now promote result type instead of overflow. The old `sumMap` behavior can be obtained by using `sumMapWithOverflow` function. [#4151](https://github.com/yandex/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
+* Removed `allow_experimental_low_cardinality_type` setting. `LowCardinality` data types are production ready. [#4323](https://github.com/ClickHouse/ClickHouse/pull/4323) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Reduce mark cache size and uncompressed cache size according to the available memory amount. [#4240](https://github.com/ClickHouse/ClickHouse/pull/4240) ([Lopatin Konstantin](https://github.com/k-lopatin))
+* Added keyword `INDEX` in `CREATE TABLE` query. A column with name `index` must be quoted with backticks or double quotes: `` `index` ``. [#4143](https://github.com/ClickHouse/ClickHouse/pull/4143) ([Nikita Vasilev](https://github.com/nikvas0))
+* `sumMap` now promotes the result type instead of overflowing. The old `sumMap` behavior can be obtained by using the `sumMapWithOverflow` function (see the example after this list). [#4151](https://github.com/ClickHouse/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
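+
+The `sumMap` change in a small hedged example (table `t` is hypothetical; `UInt8` is chosen to make the overflow visible):
+
+```sql
+CREATE TABLE t (k Array(UInt8), v Array(UInt8)) ENGINE = Memory;
+INSERT INTO t VALUES ([1], [200]), ([1], [100]);
+SELECT sumMap(k, v) FROM t;             -- ([1],[300]): result type is promoted
+SELECT sumMapWithOverflow(k, v) FROM t; -- ([1],[44]):  UInt8 is kept, 300 wraps to 44
+```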
### Performance Improvements
-* `std::sort` replaced by `pdqsort` for queries without `LIMIT`. [#4236](https://github.com/yandex/ClickHouse/pull/4236) ([Evgenii Pravda](https://github.com/kvinty))
-* Now server reuse threads from global thread pool. This affects performance in some corner cases. [#4150](https://github.com/yandex/ClickHouse/pull/4150) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `std::sort` replaced by `pdqsort` for queries without `LIMIT`. [#4236](https://github.com/ClickHouse/ClickHouse/pull/4236) ([Evgenii Pravda](https://github.com/kvinty))
+* Now the server reuses threads from the global thread pool. This affects performance in some corner cases. [#4150](https://github.com/ClickHouse/ClickHouse/pull/4150) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Improvements
-* Implemented AIO support for FreeBSD. [#4305](https://github.com/yandex/ClickHouse/pull/4305) ([urgordeadbeef](https://github.com/urgordeadbeef))
-* `SELECT * FROM a JOIN b USING a, b` now return `a` and `b` columns only from the left table. [#4141](https://github.com/yandex/ClickHouse/pull/4141) ([Artem Zuikov](https://github.com/4ertus2))
-* Allow `-C` option of client to work as `-c` option. [#4232](https://github.com/yandex/ClickHouse/pull/4232) ([syominsergey](https://github.com/syominsergey))
-* Now option `--password` used without value requires password from stdin. [#4230](https://github.com/yandex/ClickHouse/pull/4230) ([BSD_Conqueror](https://github.com/bsd-conqueror))
-* Added highlighting of unescaped metacharacters in string literals that contain `LIKE` expressions or regexps. [#4327](https://github.com/yandex/ClickHouse/pull/4327) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added cancelling of HTTP read only queries if client socket goes away. [#4213](https://github.com/yandex/ClickHouse/pull/4213) ([nvartolomei](https://github.com/nvartolomei))
-* Now server reports progress to keep client connections alive. [#4215](https://github.com/yandex/ClickHouse/pull/4215) ([Ivan](https://github.com/abyss7))
-* Slightly better message with reason for OPTIMIZE query with `optimize_throw_if_noop` setting enabled. [#4294](https://github.com/yandex/ClickHouse/pull/4294) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added support of `--version` option for clickhouse server. [#4251](https://github.com/yandex/ClickHouse/pull/4251) ([Lopatin Konstantin](https://github.com/k-lopatin))
-* Added `--help/-h` option to `clickhouse-server`. [#4233](https://github.com/yandex/ClickHouse/pull/4233) ([Yuriy Baranov](https://github.com/yurriy))
-* Added support for scalar subqueries with aggregate function state result. [#4348](https://github.com/yandex/ClickHouse/pull/4348) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
-* Improved server shutdown time and ALTERs waiting time. [#4372](https://github.com/yandex/ClickHouse/pull/4372) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added info about the replicated_can_become_leader setting to system.replicas and add logging if the replica won't try to become leader. [#4379](https://github.com/yandex/ClickHouse/pull/4379) ([Alex Zatelepin](https://github.com/ztlpn))
+* Implemented AIO support for FreeBSD. [#4305](https://github.com/ClickHouse/ClickHouse/pull/4305) ([urgordeadbeef](https://github.com/urgordeadbeef))
+* `SELECT * FROM a JOIN b USING a, b` now returns the `a` and `b` columns only from the left table (see the sketch after this list). [#4141](https://github.com/ClickHouse/ClickHouse/pull/4141) ([Artem Zuikov](https://github.com/4ertus2))
+* Allow the client's `-C` option to work like the `-c` option. [#4232](https://github.com/ClickHouse/ClickHouse/pull/4232) ([syominsergey](https://github.com/syominsergey))
+* The `--password` option used without a value now requires the password to be read from stdin. [#4230](https://github.com/ClickHouse/ClickHouse/pull/4230) ([BSD_Conqueror](https://github.com/bsd-conqueror))
+* Added highlighting of unescaped metacharacters in string literals that contain `LIKE` expressions or regexps. [#4327](https://github.com/ClickHouse/ClickHouse/pull/4327) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added cancelling of HTTP read-only queries if the client socket goes away. [#4213](https://github.com/ClickHouse/ClickHouse/pull/4213) ([nvartolomei](https://github.com/nvartolomei))
+* The server now reports progress to keep client connections alive. [#4215](https://github.com/ClickHouse/ClickHouse/pull/4215) ([Ivan](https://github.com/abyss7))
+* Slightly better message with the reason for an `OPTIMIZE` query when the `optimize_throw_if_noop` setting is enabled. [#4294](https://github.com/ClickHouse/ClickHouse/pull/4294) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added support for the `--version` option for the ClickHouse server. [#4251](https://github.com/ClickHouse/ClickHouse/pull/4251) ([Lopatin Konstantin](https://github.com/k-lopatin))
+* Added `--help/-h` option to `clickhouse-server`. [#4233](https://github.com/ClickHouse/ClickHouse/pull/4233) ([Yuriy Baranov](https://github.com/yurriy))
+* Added support for scalar subqueries with aggregate function state result. [#4348](https://github.com/ClickHouse/ClickHouse/pull/4348) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
+* Improved server shutdown time and ALTERs waiting time. [#4372](https://github.com/ClickHouse/ClickHouse/pull/4372) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added info about the `replicated_can_become_leader` setting to `system.replicas`, and added logging if the replica won't try to become leader. [#4379](https://github.com/ClickHouse/ClickHouse/pull/4379) ([Alex Zatelepin](https://github.com/ztlpn))
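+
+A minimal sketch of the `USING` behaviour above, assuming hypothetical tables `t1` and `t2`:
+
+```sql
+CREATE TABLE t1 (id UInt32, v String) ENGINE = Memory;
+CREATE TABLE t2 (id UInt32, w String) ENGINE = Memory;
+
+-- The joined column `id` now appears once in the result, taken from the left table:
+SELECT * FROM t1 JOIN t2 USING (id);
+```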
## ClickHouse release 19.1.14, 2019-03-14
-* Fixed error `Column ... queried more than once` that may happen if the setting `asterisk_left_columns_only` is set to 1 in case of using `GLOBAL JOIN` with `SELECT *` (rare case). The issue does not exist in 19.3 and newer. [6bac7d8d](https://github.com/yandex/ClickHouse/pull/4692/commits/6bac7d8d11a9b0d6de0b32b53c47eb2f6f8e7062) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed error `Column ... queried more than once` that may happen if the setting `asterisk_left_columns_only` is set to 1 when using `GLOBAL JOIN` with `SELECT *` (a rare case). The issue does not exist in 19.3 and newer. [6bac7d8d](https://github.com/ClickHouse/ClickHouse/pull/4692/commits/6bac7d8d11a9b0d6de0b32b53c47eb2f6f8e7062) ([Artem Zuikov](https://github.com/4ertus2))
## ClickHouse release 19.1.13, 2019-03-12
@@ -1278,164 +1278,164 @@ This release contains exactly the same set of patches as 19.3.6.
## ClickHouse release 19.1.9, 2019-02-21
### Bug fixes
-* Fixed backward incompatibility with old versions due to wrong implementation of `send_logs_level` setting. [#4445](https://github.com/yandex/ClickHouse/pull/4445) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed backward incompatibility of table function `remote` introduced with column comments. [#4446](https://github.com/yandex/ClickHouse/pull/4446) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed backward incompatibility with old versions due to wrong implementation of `send_logs_level` setting. [#4445](https://github.com/ClickHouse/ClickHouse/pull/4445) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed backward incompatibility of table function `remote` introduced with column comments. [#4446](https://github.com/ClickHouse/ClickHouse/pull/4446) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.1.8, 2019-02-16
### Bug Fixes
-* Fix install package with missing /etc/clickhouse-server/config.xml. [#4343](https://github.com/yandex/ClickHouse/pull/4343) ([proller](https://github.com/proller))
+* Fix install package with missing `/etc/clickhouse-server/config.xml`. [#4343](https://github.com/ClickHouse/ClickHouse/pull/4343) ([proller](https://github.com/proller))
## ClickHouse release 19.1.7, 2019-02-15
### Bug Fixes
-* Correctly return the right type and properly handle locks in `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
-* Fixed error when system logs are tried to create again at server shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error: if there is a database with `Dictionary` engine, all dictionaries forced to load at server startup, and if there is a dictionary with ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
-* `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed deadlock when `SELECT` from a table with `File` engine was retried after `No such file or directory` error. [#4161](https://github.com/yandex/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed an issue: local ClickHouse dictionaries are loaded via TCP, but should load within process. [#4166](https://github.com/yandex/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed `No message received` error when interacting with PostgreSQL ODBC Driver through TLS connection. Also fixes segfault when using MySQL ODBC Driver. [#4170](https://github.com/yandex/ClickHouse/pull/4170) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Temporarily disable predicate optimization for `ORDER BY`. [#3890](https://github.com/yandex/ClickHouse/pull/3890) ([Winter Zhang](https://github.com/zhang2014))
-* Fixed infinite loop when selecting from table function `numbers(0)`. [#4280](https://github.com/yandex/ClickHouse/pull/4280) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed `compile_expressions` bug with comparison of big (more than int16) dates. [#4341](https://github.com/yandex/ClickHouse/pull/4341) ([alesapin](https://github.com/alesapin))
-* Fixed segmentation fault with `uncompressed_cache=1` and exception with wrong uncompressed size. [#4186](https://github.com/yandex/ClickHouse/pull/4186) ([alesapin](https://github.com/alesapin))
-* Fixed `ALL JOIN` with duplicates in right table. [#4184](https://github.com/yandex/ClickHouse/pull/4184) ([Artem Zuikov](https://github.com/4ertus2))
-* Fixed wrong behaviour when doing `INSERT ... SELECT ... FROM file(...)` query and file has `CSVWithNames` or `TSVWIthNames` format and the first data row is missing. [#4297](https://github.com/yandex/ClickHouse/pull/4297) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed aggregate functions execution with `Array(LowCardinality)` arguments. [#4055](https://github.com/yandex/ClickHouse/pull/4055) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Debian package: correct /etc/clickhouse-server/preprocessed link according to config. [#4205](https://github.com/yandex/ClickHouse/pull/4205) ([proller](https://github.com/proller))
-* Fixed fuzz test under undefined behavior sanitizer: added parameter type check for `quantile*Weighted` family of functions. [#4145](https://github.com/yandex/ClickHouse/pull/4145) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Make `START REPLICATED SENDS` command start replicated sends. [#4229](https://github.com/yandex/ClickHouse/pull/4229) ([nvartolomei](https://github.com/nvartolomei))
-* Fixed `Not found column` for duplicate columns in JOIN ON section. [#4279](https://github.com/yandex/ClickHouse/pull/4279) ([Artem Zuikov](https://github.com/4ertus2))
-* Now `/etc/ssl` is used as default directory with SSL certificates. [#4167](https://github.com/yandex/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed crash on dictionary reload if dictionary not available. [#4188](https://github.com/yandex/ClickHouse/pull/4188) ([proller](https://github.com/proller))
-* Fixed bug with incorrect `Date` and `DateTime` comparison. [#4237](https://github.com/yandex/ClickHouse/pull/4237) ([valexey](https://github.com/valexey))
-* Fixed incorrect result when `Date` and `DateTime` arguments are used in branches of conditional operator (function `if`). Added generic case for function `if`. [#4243](https://github.com/yandex/ClickHouse/pull/4243) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Correctly return the right type and properly handle locks in the `joinGet` function. [#4153](https://github.com/ClickHouse/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
+* Fixed an error when system logs were attempted to be created again at server shutdown. [#4254](https://github.com/ClickHouse/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed an error: if there is a database with the `Dictionary` engine, all dictionaries are forced to load at server startup, and if there is a dictionary with a ClickHouse source on localhost, that dictionary cannot load. [#4255](https://github.com/ClickHouse/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/ClickHouse/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fixed: `clickhouse-client` could segfault on exit while loading data for command-line suggestions if it was run in interactive mode. [#4317](https://github.com/ClickHouse/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a race condition when selecting from `system.tables` that could give a `table doesn't exist` error. [#4313](https://github.com/ClickHouse/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a deadlock when a `SELECT` from a table with the `File` engine was retried after a `No such file or directory` error. [#4161](https://github.com/ClickHouse/ClickHouse/pull/4161) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed an issue: local ClickHouse dictionaries were loaded via TCP, but they should load in-process. [#4166](https://github.com/ClickHouse/ClickHouse/pull/4166) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed `No message received` error when interacting with PostgreSQL ODBC Driver through TLS connection. Also fixes segfault when using MySQL ODBC Driver. [#4170](https://github.com/ClickHouse/ClickHouse/pull/4170) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Temporarily disable predicate optimization for `ORDER BY`. [#3890](https://github.com/ClickHouse/ClickHouse/pull/3890) ([Winter Zhang](https://github.com/zhang2014))
+* Fixed an infinite loop when selecting from the table function `numbers(0)` (see the check after this list). [#4280](https://github.com/ClickHouse/ClickHouse/pull/4280) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a `compile_expressions` bug with comparison of big (greater than int16) dates. [#4341](https://github.com/ClickHouse/ClickHouse/pull/4341) ([alesapin](https://github.com/alesapin))
+* Fixed a segmentation fault with `uncompressed_cache=1` and an exception with wrong uncompressed size. [#4186](https://github.com/ClickHouse/ClickHouse/pull/4186) ([alesapin](https://github.com/alesapin))
+* Fixed `ALL JOIN` with duplicates in the right table. [#4184](https://github.com/ClickHouse/ClickHouse/pull/4184) ([Artem Zuikov](https://github.com/4ertus2))
+* Fixed wrong behaviour when doing an `INSERT ... SELECT ... FROM file(...)` query while the file has `CSVWithNames` or `TSVWithNames` format and the first data row is missing. [#4297](https://github.com/ClickHouse/ClickHouse/pull/4297) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed aggregate functions execution with `Array(LowCardinality)` arguments. [#4055](https://github.com/ClickHouse/ClickHouse/pull/4055) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Debian package: correct the `/etc/clickhouse-server/preprocessed` link according to config. [#4205](https://github.com/ClickHouse/ClickHouse/pull/4205) ([proller](https://github.com/proller))
+* Fixed fuzz test under undefined behavior sanitizer: added parameter type check for `quantile*Weighted` family of functions. [#4145](https://github.com/ClickHouse/ClickHouse/pull/4145) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Make `START REPLICATED SENDS` command start replicated sends. [#4229](https://github.com/ClickHouse/ClickHouse/pull/4229) ([nvartolomei](https://github.com/nvartolomei))
+* Fixed `Not found column` for duplicate columns in JOIN ON section. [#4279](https://github.com/ClickHouse/ClickHouse/pull/4279) ([Artem Zuikov](https://github.com/4ertus2))
+* Now `/etc/ssl` is used as the default directory with SSL certificates. [#4167](https://github.com/ClickHouse/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a crash on dictionary reload if the dictionary is not available. [#4188](https://github.com/ClickHouse/ClickHouse/pull/4188) ([proller](https://github.com/proller))
+* Fixed bug with incorrect `Date` and `DateTime` comparison. [#4237](https://github.com/ClickHouse/ClickHouse/pull/4237) ([valexey](https://github.com/valexey))
+* Fixed incorrect result when `Date` and `DateTime` arguments are used in branches of the conditional operator (function `if`). Added a generic case for function `if`. [#4243](https://github.com/ClickHouse/ClickHouse/pull/4243) ([alexey-milovidov](https://github.com/alexey-milovidov))
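+
+A quick check of the `numbers(0)` fix noted above:
+
+```sql
+-- Previously this looped forever; now it returns an empty result set:
+SELECT * FROM numbers(0);
+```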
## ClickHouse release 19.1.6, 2019-01-24
### New Features
-* Custom per column compression codecs for tables. [#3899](https://github.com/yandex/ClickHouse/pull/3899) [#4111](https://github.com/yandex/ClickHouse/pull/4111) ([alesapin](https://github.com/alesapin), [Winter Zhang](https://github.com/zhang2014), [Anatoly](https://github.com/Sindbag))
-* Added compression codec `Delta`. [#4052](https://github.com/yandex/ClickHouse/pull/4052) ([alesapin](https://github.com/alesapin))
-* Allow to `ALTER` compression codecs. [#4054](https://github.com/yandex/ClickHouse/pull/4054) ([alesapin](https://github.com/alesapin))
-* Added functions `left`, `right`, `trim`, `ltrim`, `rtrim`, `timestampadd`, `timestampsub` for SQL standard compatibility. [#3826](https://github.com/yandex/ClickHouse/pull/3826) ([Ivan Blinkov](https://github.com/blinkov))
-* Support for write in `HDFS` tables and `hdfs` table function. [#4084](https://github.com/yandex/ClickHouse/pull/4084) ([alesapin](https://github.com/alesapin))
-* Added functions to search for multiple constant strings from big haystack: `multiPosition`, `multiSearch` ,`firstMatch` also with `-UTF8`, `-CaseInsensitive`, and `-CaseInsensitiveUTF8` variants. [#4053](https://github.com/yandex/ClickHouse/pull/4053) ([Danila Kutenin](https://github.com/danlark1))
-* Pruning of unused shards if `SELECT` query filters by sharding key (setting `optimize_skip_unused_shards`). [#3851](https://github.com/yandex/ClickHouse/pull/3851) ([Gleb Kanterov](https://github.com/kanterov), [Ivan](https://github.com/abyss7))
-* Allow `Kafka` engine to ignore some number of parsing errors per block. [#4094](https://github.com/yandex/ClickHouse/pull/4094) ([Ivan](https://github.com/abyss7))
-* Added support for `CatBoost` multiclass models evaluation. Function `modelEvaluate` returns tuple with per-class raw predictions for multiclass models. `libcatboostmodel.so` should be built with [#607](https://github.com/catboost/catboost/pull/607). [#3959](https://github.com/yandex/ClickHouse/pull/3959) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Added functions `filesystemAvailable`, `filesystemFree`, `filesystemCapacity`. [#4097](https://github.com/yandex/ClickHouse/pull/4097) ([Boris Granveaud](https://github.com/bgranvea))
-* Added hashing functions `xxHash64` and `xxHash32`. [#3905](https://github.com/yandex/ClickHouse/pull/3905) ([filimonov](https://github.com/filimonov))
-* Added `gccMurmurHash` hashing function (GCC flavoured Murmur hash) which uses the same hash seed as [gcc](https://github.com/gcc-mirror/gcc/blob/41d6b10e96a1de98e90a7c0378437c3255814b16/libstdc%2B%2B-v3/include/bits/functional_hash.h#L191) [#4000](https://github.com/yandex/ClickHouse/pull/4000) ([sundyli](https://github.com/sundy-li))
-* Added hashing functions `javaHash`, `hiveHash`. [#3811](https://github.com/yandex/ClickHouse/pull/3811) ([shangshujie365](https://github.com/shangshujie365))
-* Added table function `remoteSecure`. Function works as `remote`, but uses secure connection. [#4088](https://github.com/yandex/ClickHouse/pull/4088) ([proller](https://github.com/proller))
+* Custom per-column compression codecs for tables (see the sketch after this list). [#3899](https://github.com/ClickHouse/ClickHouse/pull/3899) [#4111](https://github.com/ClickHouse/ClickHouse/pull/4111) ([alesapin](https://github.com/alesapin), [Winter Zhang](https://github.com/zhang2014), [Anatoly](https://github.com/Sindbag))
+* Added compression codec `Delta`. [#4052](https://github.com/ClickHouse/ClickHouse/pull/4052) ([alesapin](https://github.com/alesapin))
+* Allow changing compression codecs via `ALTER`. [#4054](https://github.com/ClickHouse/ClickHouse/pull/4054) ([alesapin](https://github.com/alesapin))
+* Added functions `left`, `right`, `trim`, `ltrim`, `rtrim`, `timestampadd`, `timestampsub` for SQL standard compatibility. [#3826](https://github.com/ClickHouse/ClickHouse/pull/3826) ([Ivan Blinkov](https://github.com/blinkov))
+* Support for writes to `HDFS` tables and the `hdfs` table function. [#4084](https://github.com/ClickHouse/ClickHouse/pull/4084) ([alesapin](https://github.com/alesapin))
+* Added functions to search for multiple constant strings in a big haystack: `multiPosition`, `multiSearch`, `firstMatch`, also with `-UTF8`, `-CaseInsensitive`, and `-CaseInsensitiveUTF8` variants. [#4053](https://github.com/ClickHouse/ClickHouse/pull/4053) ([Danila Kutenin](https://github.com/danlark1))
+* Pruning of unused shards if `SELECT` query filters by sharding key (setting `optimize_skip_unused_shards`). [#3851](https://github.com/ClickHouse/ClickHouse/pull/3851) ([Gleb Kanterov](https://github.com/kanterov), [Ivan](https://github.com/abyss7))
+* Allow `Kafka` engine to ignore some number of parsing errors per block. [#4094](https://github.com/ClickHouse/ClickHouse/pull/4094) ([Ivan](https://github.com/abyss7))
+* Added support for `CatBoost` multiclass model evaluation. The `modelEvaluate` function returns a tuple with per-class raw predictions for multiclass models. `libcatboostmodel.so` should be built with [#607](https://github.com/catboost/catboost/pull/607). [#3959](https://github.com/ClickHouse/ClickHouse/pull/3959) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Added functions `filesystemAvailable`, `filesystemFree`, `filesystemCapacity`. [#4097](https://github.com/ClickHouse/ClickHouse/pull/4097) ([Boris Granveaud](https://github.com/bgranvea))
+* Added hashing functions `xxHash64` and `xxHash32`. [#3905](https://github.com/ClickHouse/ClickHouse/pull/3905) ([filimonov](https://github.com/filimonov))
+* Added `gccMurmurHash` hashing function (GCC flavoured Murmur hash) which uses the same hash seed as [gcc](https://github.com/gcc-mirror/gcc/blob/41d6b10e96a1de98e90a7c0378437c3255814b16/libstdc%2B%2B-v3/include/bits/functional_hash.h#L191) [#4000](https://github.com/ClickHouse/ClickHouse/pull/4000) ([sundyli](https://github.com/sundy-li))
+* Added hashing functions `javaHash`, `hiveHash`. [#3811](https://github.com/ClickHouse/ClickHouse/pull/3811) ([shangshujie365](https://github.com/shangshujie365))
+* Added table function `remoteSecure`. The function works like `remote`, but uses a secure connection. [#4088](https://github.com/ClickHouse/ClickHouse/pull/4088) ([proller](https://github.com/proller))
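+
+A minimal sketch of per-column codecs and changing them via `ALTER` (hypothetical table and column names):
+
+```sql
+CREATE TABLE codec_demo
+(
+    ts DateTime CODEC(Delta, LZ4),
+    value Float64 CODEC(ZSTD)
+)
+ENGINE = MergeTree ORDER BY ts;
+
+-- Switch a column to a different codec afterwards:
+ALTER TABLE codec_demo MODIFY COLUMN ts DateTime CODEC(Delta, ZSTD);
+```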
### Experimental features
-* Added multiple JOINs emulation (`allow_experimental_multiple_joins_emulation` setting). [#3946](https://github.com/yandex/ClickHouse/pull/3946) ([Artem Zuikov](https://github.com/4ertus2))
+* Added multiple JOINs emulation (`allow_experimental_multiple_joins_emulation` setting). [#3946](https://github.com/ClickHouse/ClickHouse/pull/3946) ([Artem Zuikov](https://github.com/4ertus2))
### Bug Fixes
-* Make `compiled_expression_cache_size` setting limited by default to lower memory consumption. [#4041](https://github.com/yandex/ClickHouse/pull/4041) ([alesapin](https://github.com/alesapin))
-* Fix a bug that led to hangups in threads that perform ALTERs of Replicated tables and in the thread that updates configuration from ZooKeeper. [#2947](https://github.com/yandex/ClickHouse/issues/2947) [#3891](https://github.com/yandex/ClickHouse/issues/3891) [#3934](https://github.com/yandex/ClickHouse/pull/3934) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fixed a race condition when executing a distributed ALTER task. The race condition led to more than one replica trying to execute the task and all replicas except one failing with a ZooKeeper error. [#3904](https://github.com/yandex/ClickHouse/pull/3904) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fix a bug when `from_zk` config elements weren't refreshed after a request to ZooKeeper timed out. [#2947](https://github.com/yandex/ClickHouse/issues/2947) [#3947](https://github.com/yandex/ClickHouse/pull/3947) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fix bug with wrong prefix for IPv4 subnet masks. [#3945](https://github.com/yandex/ClickHouse/pull/3945) ([alesapin](https://github.com/alesapin))
-* Fixed crash (`std::terminate`) in rare cases when a new thread cannot be created due to exhausted resources. [#3956](https://github.com/yandex/ClickHouse/pull/3956) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix bug when in `remote` table function execution when wrong restrictions were used for in `getStructureOfRemoteTable`. [#4009](https://github.com/yandex/ClickHouse/pull/4009) ([alesapin](https://github.com/alesapin))
-* Fix a leak of netlink sockets. They were placed in a pool where they were never deleted and new sockets were created at the start of a new thread when all current sockets were in use. [#4017](https://github.com/yandex/ClickHouse/pull/4017) ([Alex Zatelepin](https://github.com/ztlpn))
-* Fix bug with closing `/proc/self/fd` directory earlier than all fds were read from `/proc` after forking `odbc-bridge` subprocess. [#4120](https://github.com/yandex/ClickHouse/pull/4120) ([alesapin](https://github.com/alesapin))
-* Fixed String to UInt monotonic conversion in case of usage String in primary key. [#3870](https://github.com/yandex/ClickHouse/pull/3870) ([Winter Zhang](https://github.com/zhang2014))
-* Fixed error in calculation of integer conversion function monotonicity. [#3921](https://github.com/yandex/ClickHouse/pull/3921) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed segfault in `arrayEnumerateUniq`, `arrayEnumerateDense` functions in case of some invalid arguments. [#3909](https://github.com/yandex/ClickHouse/pull/3909) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fix UB in StorageMerge. [#3910](https://github.com/yandex/ClickHouse/pull/3910) ([Amos Bird](https://github.com/amosbird))
-* Fixed segfault in functions `addDays`, `subtractDays`. [#3913](https://github.com/yandex/ClickHouse/pull/3913) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error: functions `round`, `floor`, `trunc`, `ceil` may return bogus result when executed on integer argument and large negative scale. [#3914](https://github.com/yandex/ClickHouse/pull/3914) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed a bug induced by 'kill query sync' which leads to a core dump. [#3916](https://github.com/yandex/ClickHouse/pull/3916) ([muVulDeePecker](https://github.com/fancyqlx))
-* Fix bug with long delay after empty replication queue. [#3928](https://github.com/yandex/ClickHouse/pull/3928) [#3932](https://github.com/yandex/ClickHouse/pull/3932) ([alesapin](https://github.com/alesapin))
-* Fixed excessive memory usage in case of inserting into table with `LowCardinality` primary key. [#3955](https://github.com/yandex/ClickHouse/pull/3955) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Fixed `LowCardinality` serialization for `Native` format in case of empty arrays. [#3907](https://github.com/yandex/ClickHouse/issues/3907) [#4011](https://github.com/yandex/ClickHouse/pull/4011) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Fixed incorrect result while using distinct by single LowCardinality numeric column. [#3895](https://github.com/yandex/ClickHouse/issues/3895) [#4012](https://github.com/yandex/ClickHouse/pull/4012) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Fixed specialized aggregation with LowCardinality key (in case when `compile` setting is enabled). [#3886](https://github.com/yandex/ClickHouse/pull/3886) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Fix user and password forwarding for replicated tables queries. [#3957](https://github.com/yandex/ClickHouse/pull/3957) ([alesapin](https://github.com/alesapin)) ([小路](https://github.com/nicelulu))
-* Fixed very rare race condition that can happen when listing tables in Dictionary database while reloading dictionaries. [#3970](https://github.com/yandex/ClickHouse/pull/3970) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed incorrect result when HAVING was used with ROLLUP or CUBE. [#3756](https://github.com/yandex/ClickHouse/issues/3756) [#3837](https://github.com/yandex/ClickHouse/pull/3837) ([Sam Chou](https://github.com/reflection))
-* Fixed column aliases for query with `JOIN ON` syntax and distributed tables. [#3980](https://github.com/yandex/ClickHouse/pull/3980) ([Winter Zhang](https://github.com/zhang2014))
-* Fixed error in internal implementation of `quantileTDigest` (found by Artem Vakhrushev). This error never happens in ClickHouse and was relevant only for those who use ClickHouse codebase as a library directly. [#3935](https://github.com/yandex/ClickHouse/pull/3935) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Limit the `compiled_expression_cache_size` setting by default to lower memory consumption. [#4041](https://github.com/ClickHouse/ClickHouse/pull/4041) ([alesapin](https://github.com/alesapin))
+* Fix a bug that led to hangups in threads that perform ALTERs of Replicated tables and in the thread that updates configuration from ZooKeeper. [#2947](https://github.com/ClickHouse/ClickHouse/issues/2947) [#3891](https://github.com/ClickHouse/ClickHouse/issues/3891) [#3934](https://github.com/ClickHouse/ClickHouse/pull/3934) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fixed a race condition when executing a distributed ALTER task. The race condition led to more than one replica trying to execute the task and all replicas except one failing with a ZooKeeper error. [#3904](https://github.com/ClickHouse/ClickHouse/pull/3904) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fix a bug when `from_zk` config elements weren't refreshed after a request to ZooKeeper timed out. [#2947](https://github.com/ClickHouse/ClickHouse/issues/2947) [#3947](https://github.com/ClickHouse/ClickHouse/pull/3947) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fix bug with wrong prefix for IPv4 subnet masks. [#3945](https://github.com/ClickHouse/ClickHouse/pull/3945) ([alesapin](https://github.com/alesapin))
+* Fixed crash (`std::terminate`) in rare cases when a new thread cannot be created due to exhausted resources. [#3956](https://github.com/ClickHouse/ClickHouse/pull/3956) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug in `remote` table function execution when wrong restrictions were used in `getStructureOfRemoteTable`. [#4009](https://github.com/ClickHouse/ClickHouse/pull/4009) ([alesapin](https://github.com/alesapin))
+* Fix a leak of netlink sockets. They were placed in a pool where they were never deleted and new sockets were created at the start of a new thread when all current sockets were in use. [#4017](https://github.com/ClickHouse/ClickHouse/pull/4017) ([Alex Zatelepin](https://github.com/ztlpn))
+* Fixed a bug with closing the `/proc/self/fd` directory before all fds were read from `/proc` after forking the `odbc-bridge` subprocess. [#4120](https://github.com/ClickHouse/ClickHouse/pull/4120) ([alesapin](https://github.com/alesapin))
+* Fixed String-to-UInt monotonic conversion when a String is used in the primary key. [#3870](https://github.com/ClickHouse/ClickHouse/pull/3870) ([Winter Zhang](https://github.com/zhang2014))
+* Fixed error in calculation of integer conversion function monotonicity. [#3921](https://github.com/ClickHouse/ClickHouse/pull/3921) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed segfault in `arrayEnumerateUniq`, `arrayEnumerateDense` functions in case of some invalid arguments. [#3909](https://github.com/ClickHouse/ClickHouse/pull/3909) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fix UB in StorageMerge. [#3910](https://github.com/ClickHouse/ClickHouse/pull/3910) ([Amos Bird](https://github.com/amosbird))
+* Fixed segfault in functions `addDays`, `subtractDays`. [#3913](https://github.com/ClickHouse/ClickHouse/pull/3913) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed an error: functions `round`, `floor`, `trunc`, `ceil` could return a bogus result when executed on an integer argument with a large negative scale (see the example after this list). [#3914](https://github.com/ClickHouse/ClickHouse/pull/3914) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed a bug induced by `KILL QUERY ... SYNC` that led to a core dump. [#3916](https://github.com/ClickHouse/ClickHouse/pull/3916) ([muVulDeePecker](https://github.com/fancyqlx))
+* Fix bug with long delay after empty replication queue. [#3928](https://github.com/ClickHouse/ClickHouse/pull/3928) [#3932](https://github.com/ClickHouse/ClickHouse/pull/3932) ([alesapin](https://github.com/alesapin))
+* Fixed excessive memory usage in case of inserting into table with `LowCardinality` primary key. [#3955](https://github.com/ClickHouse/ClickHouse/pull/3955) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Fixed `LowCardinality` serialization for `Native` format in case of empty arrays. [#3907](https://github.com/ClickHouse/ClickHouse/issues/3907) [#4011](https://github.com/ClickHouse/ClickHouse/pull/4011) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Fixed incorrect result while using distinct by single LowCardinality numeric column. [#3895](https://github.com/ClickHouse/ClickHouse/issues/3895) [#4012](https://github.com/ClickHouse/ClickHouse/pull/4012) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Fixed specialized aggregation with LowCardinality key (in case when `compile` setting is enabled). [#3886](https://github.com/ClickHouse/ClickHouse/pull/3886) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Fix user and password forwarding for replicated table queries. [#3957](https://github.com/ClickHouse/ClickHouse/pull/3957) ([alesapin](https://github.com/alesapin)) ([小路](https://github.com/nicelulu))
+* Fixed very rare race condition that can happen when listing tables in Dictionary database while reloading dictionaries. [#3970](https://github.com/ClickHouse/ClickHouse/pull/3970) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed incorrect result when HAVING was used with ROLLUP or CUBE. [#3756](https://github.com/ClickHouse/ClickHouse/issues/3756) [#3837](https://github.com/ClickHouse/ClickHouse/pull/3837) ([Sam Chou](https://github.com/reflection))
+* Fixed column aliases for query with `JOIN ON` syntax and distributed tables. [#3980](https://github.com/ClickHouse/ClickHouse/pull/3980) ([Winter Zhang](https://github.com/zhang2014))
+* Fixed error in internal implementation of `quantileTDigest` (found by Artem Vakhrushev). This error never happens in ClickHouse and was relevant only for those who use ClickHouse codebase as a library directly. [#3935](https://github.com/ClickHouse/ClickHouse/pull/3935) ([alexey-milovidov](https://github.com/alexey-milovidov))
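+
+An illustration of the rounding fix above. The expected results follow the documented semantics (a negative scale rounds to a power of ten, and an oversized negative scale yields 0); they are not verified against this exact release:
+
+```sql
+SELECT round(123, -2), floor(123, -10); -- 100, 0
+```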
### Improvements
-* Support for `IF NOT EXISTS` in `ALTER TABLE ADD COLUMN` statements along with `IF EXISTS` in `DROP/MODIFY/CLEAR/COMMENT COLUMN`. [#3900](https://github.com/yandex/ClickHouse/pull/3900) ([Boris Granveaud](https://github.com/bgranvea))
-* Function `parseDateTimeBestEffort`: support for formats `DD.MM.YYYY`, `DD.MM.YY`, `DD-MM-YYYY`, `DD-Mon-YYYY`, `DD/Month/YYYY` and similar. [#3922](https://github.com/yandex/ClickHouse/pull/3922) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* `CapnProtoInputStream` now support jagged structures. [#4063](https://github.com/yandex/ClickHouse/pull/4063) ([Odin Hultgren Van Der Horst](https://github.com/Miniwoffer))
-* Usability improvement: added a check that server process is started from the data directory's owner. Do not allow to start server from root if the data belongs to non-root user. [#3785](https://github.com/yandex/ClickHouse/pull/3785) ([sergey-v-galtsev](https://github.com/sergey-v-galtsev))
-* Better logic of checking required columns during analysis of queries with JOINs. [#3930](https://github.com/yandex/ClickHouse/pull/3930) ([Artem Zuikov](https://github.com/4ertus2))
-* Decreased the number of connections in case of large number of Distributed tables in a single server. [#3726](https://github.com/yandex/ClickHouse/pull/3726) ([Winter Zhang](https://github.com/zhang2014))
-* Supported totals row for `WITH TOTALS` query for ODBC driver. [#3836](https://github.com/yandex/ClickHouse/pull/3836) ([Maksim Koritckiy](https://github.com/nightweb))
-* Allowed to use `Enum`s as integers inside if function. [#3875](https://github.com/yandex/ClickHouse/pull/3875) ([Ivan](https://github.com/abyss7))
-* Added `low_cardinality_allow_in_native_format` setting. If disabled, do not use `LowCadrinality` type in `Native` format. [#3879](https://github.com/yandex/ClickHouse/pull/3879) ([KochetovNicolai](https://github.com/KochetovNicolai))
-* Removed some redundant objects from compiled expressions cache to lower memory usage. [#4042](https://github.com/yandex/ClickHouse/pull/4042) ([alesapin](https://github.com/alesapin))
-* Add check that `SET send_logs_level = 'value'` query accept appropriate value. [#3873](https://github.com/yandex/ClickHouse/pull/3873) ([Sabyanin Maxim](https://github.com/s-mx))
-* Fixed data type check in type conversion functions. [#3896](https://github.com/yandex/ClickHouse/pull/3896) ([Winter Zhang](https://github.com/zhang2014))
+* Support for `IF NOT EXISTS` in `ALTER TABLE ADD COLUMN` statements, along with `IF EXISTS` in `DROP/MODIFY/CLEAR/COMMENT COLUMN` (see the sketch after this list). [#3900](https://github.com/ClickHouse/ClickHouse/pull/3900) ([Boris Granveaud](https://github.com/bgranvea))
+* Function `parseDateTimeBestEffort`: support for formats `DD.MM.YYYY`, `DD.MM.YY`, `DD-MM-YYYY`, `DD-Mon-YYYY`, `DD/Month/YYYY` and similar (also sketched after this list). [#3922](https://github.com/ClickHouse/ClickHouse/pull/3922) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* `CapnProtoInputStream` now supports jagged structures. [#4063](https://github.com/ClickHouse/ClickHouse/pull/4063) ([Odin Hultgren Van Der Horst](https://github.com/Miniwoffer))
+* Usability improvement: added a check that the server process is started by the owner of the data directory. Do not allow starting the server from root if the data belongs to a non-root user. [#3785](https://github.com/ClickHouse/ClickHouse/pull/3785) ([sergey-v-galtsev](https://github.com/sergey-v-galtsev))
+* Better logic of checking required columns during analysis of queries with JOINs. [#3930](https://github.com/ClickHouse/ClickHouse/pull/3930) ([Artem Zuikov](https://github.com/4ertus2))
+* Decreased the number of connections in case of large number of Distributed tables in a single server. [#3726](https://github.com/ClickHouse/ClickHouse/pull/3726) ([Winter Zhang](https://github.com/zhang2014))
+* Supported totals row for `WITH TOTALS` query for ODBC driver. [#3836](https://github.com/ClickHouse/ClickHouse/pull/3836) ([Maksim Koritckiy](https://github.com/nightweb))
+* Allowed using `Enum`s as integers inside the `if` function. [#3875](https://github.com/ClickHouse/ClickHouse/pull/3875) ([Ivan](https://github.com/abyss7))
+* Added the `low_cardinality_allow_in_native_format` setting. If disabled, the `LowCardinality` type is not used in the `Native` format. [#3879](https://github.com/ClickHouse/ClickHouse/pull/3879) ([KochetovNicolai](https://github.com/KochetovNicolai))
+* Removed some redundant objects from compiled expressions cache to lower memory usage. [#4042](https://github.com/ClickHouse/ClickHouse/pull/4042) ([alesapin](https://github.com/alesapin))
+* Added a check that the `SET send_logs_level = 'value'` query accepts an appropriate value (see the sketch after this list). [#3873](https://github.com/ClickHouse/ClickHouse/pull/3873) ([Sabyanin Maxim](https://github.com/s-mx))
+* Fixed data type check in type conversion functions. [#3896](https://github.com/ClickHouse/ClickHouse/pull/3896) ([Winter Zhang](https://github.com/zhang2014))
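+
+Sketches of the `ALTER` guards, the new date formats, and the `send_logs_level` check (hypothetical table and column names; the date literal just demonstrates the `DD.MM.YYYY` form):
+
+```sql
+-- IF [NOT] EXISTS guards in ALTER:
+ALTER TABLE events ADD COLUMN IF NOT EXISTS user_agent String;
+ALTER TABLE events DROP COLUMN IF EXISTS obsolete_flag;
+
+-- parseDateTimeBestEffort now understands day-first formats:
+SELECT parseDateTimeBestEffort('31.12.2018');
+
+-- SET send_logs_level now rejects values outside the allowed set:
+SET send_logs_level = 'debug';
+```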
### Performance Improvements
-* Add a MergeTree setting `use_minimalistic_part_header_in_zookeeper`. If enabled, Replicated tables will store compact part metadata in a single part znode. This can dramatically reduce ZooKeeper snapshot size (especially if the tables have a lot of columns). Note that after enabling this setting you will not be able to downgrade to a version that doesn't support it. [#3960](https://github.com/yandex/ClickHouse/pull/3960) ([Alex Zatelepin](https://github.com/ztlpn))
-* Add an DFA-based implementation for functions `sequenceMatch` and `sequenceCount` in case pattern doesn't contain time. [#4004](https://github.com/yandex/ClickHouse/pull/4004) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
-* Performance improvement for integer numbers serialization. [#3968](https://github.com/yandex/ClickHouse/pull/3968) ([Amos Bird](https://github.com/amosbird))
-* Zero left padding PODArray so that -1 element is always valid and zeroed. It's used for branchless calculation of offsets. [#3920](https://github.com/yandex/ClickHouse/pull/3920) ([Amos Bird](https://github.com/amosbird))
-* Reverted `jemalloc` version which lead to performance degradation. [#4018](https://github.com/yandex/ClickHouse/pull/4018) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Add a MergeTree setting `use_minimalistic_part_header_in_zookeeper`. If enabled, Replicated tables will store compact part metadata in a single part znode. This can dramatically reduce ZooKeeper snapshot size (especially if the tables have a lot of columns). Note that after enabling this setting you will not be able to downgrade to a version that doesn't support it (see the sketch after this list). [#3960](https://github.com/ClickHouse/ClickHouse/pull/3960) ([Alex Zatelepin](https://github.com/ztlpn))
+* Added a DFA-based implementation for functions `sequenceMatch` and `sequenceCount` in case the pattern doesn't contain time. [#4004](https://github.com/ClickHouse/ClickHouse/pull/4004) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
+* Performance improvement for integer number serialization. [#3968](https://github.com/ClickHouse/ClickHouse/pull/3968) ([Amos Bird](https://github.com/amosbird))
+* Zero left padding of PODArray so that the -1 element is always valid and zeroed. This is used for branchless calculation of offsets. [#3920](https://github.com/ClickHouse/ClickHouse/pull/3920) ([Amos Bird](https://github.com/amosbird))
+* Reverted the `jemalloc` version which led to performance degradation. [#4018](https://github.com/ClickHouse/ClickHouse/pull/4018) ([alexey-milovidov](https://github.com/alexey-milovidov))
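+
+A minimal sketch of enabling compact part headers (hypothetical table name and ZooKeeper path; remember the one-way compatibility caveat above):
+
+```sql
+CREATE TABLE replicated_demo (d Date, x UInt32)
+ENGINE = ReplicatedMergeTree('/clickhouse/tables/01/replicated_demo', 'replica1')
+PARTITION BY toYYYYMM(d)
+ORDER BY x
+SETTINGS use_minimalistic_part_header_in_zookeeper = 1;
+```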
### Backward Incompatible Changes
-* Removed undocumented feature `ALTER MODIFY PRIMARY KEY` because it was superseded by the `ALTER MODIFY ORDER BY` command. [#3887](https://github.com/yandex/ClickHouse/pull/3887) ([Alex Zatelepin](https://github.com/ztlpn))
-* Removed function `shardByHash`. [#3833](https://github.com/yandex/ClickHouse/pull/3833) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Forbid using scalar subqueries with result of type `AggregateFunction`. [#3865](https://github.com/yandex/ClickHouse/pull/3865) ([Ivan](https://github.com/abyss7))
+* Removed the undocumented feature `ALTER MODIFY PRIMARY KEY` because it was superseded by the `ALTER MODIFY ORDER BY` command (see the sketch after this list). [#3887](https://github.com/ClickHouse/ClickHouse/pull/3887) ([Alex Zatelepin](https://github.com/ztlpn))
+* Removed function `shardByHash`. [#3833](https://github.com/ClickHouse/ClickHouse/pull/3833) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Forbid using scalar subqueries with result of type `AggregateFunction`. [#3865](https://github.com/ClickHouse/ClickHouse/pull/3865) ([Ivan](https://github.com/abyss7))
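+
+The replacement command, sketched with hypothetical names and assuming a table currently sorted by `(date, id)` (a column can be appended to the sorting key only in the same `ALTER` that adds it):
+
+```sql
+ALTER TABLE metrics ADD COLUMN batch_id UInt64, MODIFY ORDER BY (date, id, batch_id);
+```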
### Build/Testing/Packaging Improvements
-* Added support for PowerPC (`ppc64le`) build. [#4132](https://github.com/yandex/ClickHouse/pull/4132) ([Danila Kutenin](https://github.com/danlark1))
-* Stateful functional tests are run on public available dataset. [#3969](https://github.com/yandex/ClickHouse/pull/3969) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed error when the server cannot start with the `bash: /usr/bin/clickhouse-extract-from-config: Operation not permitted` message within Docker or systemd-nspawn. [#4136](https://github.com/yandex/ClickHouse/pull/4136) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Updated `rdkafka` library to v1.0.0-RC5. Used cppkafka instead of raw C interface. [#4025](https://github.com/yandex/ClickHouse/pull/4025) ([Ivan](https://github.com/abyss7))
-* Updated `mariadb-client` library. Fixed one of issues found by UBSan. [#3924](https://github.com/yandex/ClickHouse/pull/3924) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Some fixes for UBSan builds. [#3926](https://github.com/yandex/ClickHouse/pull/3926) [#3021](https://github.com/yandex/ClickHouse/pull/3021) [#3948](https://github.com/yandex/ClickHouse/pull/3948) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added support for PowerPC (`ppc64le`) build. [#4132](https://github.com/ClickHouse/ClickHouse/pull/4132) ([Danila Kutenin](https://github.com/danlark1))
+* Stateful functional tests are run on a publicly available dataset. [#3969](https://github.com/ClickHouse/ClickHouse/pull/3969) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed error when the server cannot start with the `bash: /usr/bin/clickhouse-extract-from-config: Operation not permitted` message within Docker or systemd-nspawn. [#4136](https://github.com/ClickHouse/ClickHouse/pull/4136) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Updated `rdkafka` library to v1.0.0-RC5. Used cppkafka instead of raw C interface. [#4025](https://github.com/ClickHouse/ClickHouse/pull/4025) ([Ivan](https://github.com/abyss7))
+* Updated `mariadb-client` library. Fixed one of issues found by UBSan. [#3924](https://github.com/ClickHouse/ClickHouse/pull/3924) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Some fixes for UBSan builds. [#3926](https://github.com/ClickHouse/ClickHouse/pull/3926) [#3021](https://github.com/ClickHouse/ClickHouse/pull/3021) [#3948](https://github.com/ClickHouse/ClickHouse/pull/3948) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added per-commit runs of tests with UBSan build.
* Added per-commit runs of PVS-Studio static analyzer.
-* Fixed bugs found by PVS-Studio. [#4013](https://github.com/yandex/ClickHouse/pull/4013) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed glibc compatibility issues. [#4100](https://github.com/yandex/ClickHouse/pull/4100) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Move Docker images to 18.10 and add compatibility file for glibc >= 2.28 [#3965](https://github.com/yandex/ClickHouse/pull/3965) ([alesapin](https://github.com/alesapin))
-* Add env variable if user don't want to chown directories in server Docker image. [#3967](https://github.com/yandex/ClickHouse/pull/3967) ([alesapin](https://github.com/alesapin))
-* Enabled most of the warnings from `-Weverything` in clang. Enabled `-Wpedantic`. [#3986](https://github.com/yandex/ClickHouse/pull/3986) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Added a few more warnings that are available only in clang 8. [#3993](https://github.com/yandex/ClickHouse/pull/3993) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Link to `libLLVM` rather than to individual LLVM libs when using shared linking. [#3989](https://github.com/yandex/ClickHouse/pull/3989) ([Orivej Desh](https://github.com/orivej))
-* Added sanitizer variables for test images. [#4072](https://github.com/yandex/ClickHouse/pull/4072) ([alesapin](https://github.com/alesapin))
-* `clickhouse-server` debian package will recommend `libcap2-bin` package to use `setcap` tool for setting capabilities. This is optional. [#4093](https://github.com/yandex/ClickHouse/pull/4093) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Improved compilation time, fixed includes. [#3898](https://github.com/yandex/ClickHouse/pull/3898) ([proller](https://github.com/proller))
-* Added performance tests for hash functions. [#3918](https://github.com/yandex/ClickHouse/pull/3918) ([filimonov](https://github.com/filimonov))
-* Fixed cyclic library dependences. [#3958](https://github.com/yandex/ClickHouse/pull/3958) ([proller](https://github.com/proller))
-* Improved compilation with low available memory. [#4030](https://github.com/yandex/ClickHouse/pull/4030) ([proller](https://github.com/proller))
-* Added test script to reproduce performance degradation in `jemalloc`. [#4036](https://github.com/yandex/ClickHouse/pull/4036) ([alexey-milovidov](https://github.com/alexey-milovidov))
-* Fixed misspells in comments and string literals under `dbms`. [#4122](https://github.com/yandex/ClickHouse/pull/4122) ([maiha](https://github.com/maiha))
-* Fixed typos in comments. [#4089](https://github.com/yandex/ClickHouse/pull/4089) ([Evgenii Pravda](https://github.com/kvinty))
+* Fixed bugs found by PVS-Studio. [#4013](https://github.com/ClickHouse/ClickHouse/pull/4013) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed glibc compatibility issues. [#4100](https://github.com/ClickHouse/ClickHouse/pull/4100) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Move Docker images to 18.10 and add compatibility file for glibc >= 2.28 [#3965](https://github.com/ClickHouse/ClickHouse/pull/3965) ([alesapin](https://github.com/alesapin))
+* Add an env variable for users who don't want to chown directories in the server Docker image. [#3967](https://github.com/ClickHouse/ClickHouse/pull/3967) ([alesapin](https://github.com/alesapin))
+* Enabled most of the warnings from `-Weverything` in clang. Enabled `-Wpedantic`. [#3986](https://github.com/ClickHouse/ClickHouse/pull/3986) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Added a few more warnings that are available only in clang 8. [#3993](https://github.com/ClickHouse/ClickHouse/pull/3993) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Link to `libLLVM` rather than to individual LLVM libs when using shared linking. [#3989](https://github.com/ClickHouse/ClickHouse/pull/3989) ([Orivej Desh](https://github.com/orivej))
+* Added sanitizer variables for test images. [#4072](https://github.com/ClickHouse/ClickHouse/pull/4072) ([alesapin](https://github.com/alesapin))
+* `clickhouse-server` debian package will recommend `libcap2-bin` package to use `setcap` tool for setting capabilities. This is optional. [#4093](https://github.com/ClickHouse/ClickHouse/pull/4093) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Improved compilation time, fixed includes. [#3898](https://github.com/ClickHouse/ClickHouse/pull/3898) ([proller](https://github.com/proller))
+* Added performance tests for hash functions. [#3918](https://github.com/ClickHouse/ClickHouse/pull/3918) ([filimonov](https://github.com/filimonov))
+* Fixed cyclic library dependencies. [#3958](https://github.com/ClickHouse/ClickHouse/pull/3958) ([proller](https://github.com/proller))
+* Improved compilation with low available memory. [#4030](https://github.com/ClickHouse/ClickHouse/pull/4030) ([proller](https://github.com/proller))
+* Added test script to reproduce performance degradation in `jemalloc`. [#4036](https://github.com/ClickHouse/ClickHouse/pull/4036) ([alexey-milovidov](https://github.com/alexey-milovidov))
+* Fixed misspellings in comments and string literals under `dbms`. [#4122](https://github.com/ClickHouse/ClickHouse/pull/4122) ([maiha](https://github.com/maiha))
+* Fixed typos in comments. [#4089](https://github.com/ClickHouse/ClickHouse/pull/4089) ([Evgenii Pravda](https://github.com/kvinty))
## ClickHouse release 18.16.1, 2018-12-21
### Bug fixes:
-* Fixed an error that led to problems with updating dictionaries with the ODBC source. [#3825](https://github.com/yandex/ClickHouse/issues/3825), [#3829](https://github.com/yandex/ClickHouse/issues/3829)
-* JIT compilation of aggregate functions now works with LowCardinality columns. [#3838](https://github.com/yandex/ClickHouse/issues/3838)
+* Fixed an error that led to problems with updating dictionaries with the ODBC source. [#3825](https://github.com/ClickHouse/ClickHouse/issues/3825), [#3829](https://github.com/ClickHouse/ClickHouse/issues/3829)
+* JIT compilation of aggregate functions now works with LowCardinality columns. [#3838](https://github.com/ClickHouse/ClickHouse/issues/3838)
### Improvements:
-* Added the `low_cardinality_allow_in_native_format` setting (enabled by default). When disabled, LowCardinality columns will be converted to ordinary columns for SELECT queries and ordinary columns will be expected for INSERT queries. [#3879](https://github.com/yandex/ClickHouse/pull/3879)
+* Added the `low_cardinality_allow_in_native_format` setting (enabled by default). When disabled, LowCardinality columns will be converted to ordinary columns for SELECT queries and ordinary columns will be expected for INSERT queries (see the sketch after this list). [#3879](https://github.com/ClickHouse/ClickHouse/pull/3879)
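+
+A one-line sketch of toggling it per session:
+
+```sql
+SET low_cardinality_allow_in_native_format = 0;
+```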
### Build improvements:
@@ -1445,104 +1445,104 @@ This release contains exactly the same set of patches as 19.3.6.
### New features:
-* `DEFAULT` expressions are evaluated for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`). The feature is enabled with the `insert_sample_with_metadata` setting. [#3555](https://github.com/yandex/ClickHouse/pull/3555)
-* The `ALTER TABLE` query now has the `MODIFY ORDER BY` action for changing the sorting key when adding or removing a table column. This is useful for tables in the `MergeTree` family that perform additional tasks when merging based on this sorting key, such as `SummingMergeTree`, `AggregatingMergeTree`, and so on. [#3581](https://github.com/yandex/ClickHouse/pull/3581) [#3755](https://github.com/yandex/ClickHouse/pull/3755)
-* For tables in the `MergeTree` family, now you can specify a different sorting key (`ORDER BY`) and index (`PRIMARY KEY`). The sorting key can be longer than the index. [#3581](https://github.com/yandex/ClickHouse/pull/3581)
-* Added the `hdfs` table function and the `HDFS` table engine for importing and exporting data to HDFS. [chenxing-xc](https://github.com/yandex/ClickHouse/pull/3617)
-* Added functions for working with base64: `base64Encode`, `base64Decode`, `tryBase64Decode`. [Alexander Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3350)
-* Now you can use a parameter to configure the precision of the `uniqCombined` aggregate function (select the number of HyperLogLog cells). [#3406](https://github.com/yandex/ClickHouse/pull/3406)
-* Added the `system.contributors` table that contains the names of everyone who made commits in ClickHouse. [#3452](https://github.com/yandex/ClickHouse/pull/3452)
-* Added the ability to omit the partition for the `ALTER TABLE ... FREEZE` query in order to back up all partitions at once. [#3514](https://github.com/yandex/ClickHouse/pull/3514)
-* Added `dictGet` and `dictGetOrDefault` functions that don't require specifying the type of return value. The type is determined automatically from the dictionary description. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3564)
-* Now you can specify comments for a column in the table description and change it using `ALTER`. [#3377](https://github.com/yandex/ClickHouse/pull/3377)
-* Reading is supported for `Join` type tables with simple keys. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3728)
-* Now you can specify the options `join_use_nulls`, `max_rows_in_join`, `max_bytes_in_join`, and `join_overflow_mode` when creating a `Join` type table. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3728)
-* Added the `joinGet` function that allows you to use a `Join` type table like a dictionary. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3728)
-* Added the `partition_key`, `sorting_key`, `primary_key`, and `sampling_key` columns to the `system.tables` table in order to provide information about table keys. [#3609](https://github.com/yandex/ClickHouse/pull/3609)
-* Added the `is_in_partition_key`, `is_in_sorting_key`, `is_in_primary_key`, and `is_in_sampling_key` columns to the `system.columns` table. [#3609](https://github.com/yandex/ClickHouse/pull/3609)
-* Added the `min_time` and `max_time` columns to the `system.parts` table. These columns are populated when the partitioning key is an expression consisting of `DateTime` columns. [Emmanuel Donin de Rosière](https://github.com/yandex/ClickHouse/pull/3800)
+* `DEFAULT` expressions are evaluated for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`). The feature is enabled with the `insert_sample_with_metadata` setting. [#3555](https://github.com/ClickHouse/ClickHouse/pull/3555)
+* The `ALTER TABLE` query now has the `MODIFY ORDER BY` action for changing the sorting key when adding or removing a table column. This is useful for tables in the `MergeTree` family that perform additional tasks when merging based on this sorting key, such as `SummingMergeTree`, `AggregatingMergeTree`, and so on. [#3581](https://github.com/ClickHouse/ClickHouse/pull/3581) [#3755](https://github.com/ClickHouse/ClickHouse/pull/3755)
+* For tables in the `MergeTree` family, you can now specify a different sorting key (`ORDER BY`) and index (`PRIMARY KEY`). The sorting key can be longer than the index (see the sketch after this list). [#3581](https://github.com/ClickHouse/ClickHouse/pull/3581)
+* Added the `hdfs` table function and the `HDFS` table engine for importing and exporting data to HDFS. [chenxing-xc](https://github.com/ClickHouse/ClickHouse/pull/3617)
+* Added functions for working with base64: `base64Encode`, `base64Decode`, `tryBase64Decode` (also sketched after this list). [Alexander Krasheninnikov](https://github.com/ClickHouse/ClickHouse/pull/3350)
+* Now you can use a parameter to configure the precision of the `uniqCombined` aggregate function (select the number of HyperLogLog cells). [#3406](https://github.com/ClickHouse/ClickHouse/pull/3406)
+* Added the `system.contributors` table that contains the names of everyone who made commits in ClickHouse. [#3452](https://github.com/ClickHouse/ClickHouse/pull/3452)
+* Added the ability to omit the partition for the `ALTER TABLE ... FREEZE` query in order to back up all partitions at once. [#3514](https://github.com/ClickHouse/ClickHouse/pull/3514)
+* Added `dictGet` and `dictGetOrDefault` functions that don't require specifying the type of return value. The type is determined automatically from the dictionary description. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3564)
+* Now you can specify a comment for a column in the table description and change it using `ALTER`. [#3377](https://github.com/ClickHouse/ClickHouse/pull/3377)
+* Reading is supported for `Join` type tables with simple keys. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3728)
+* Now you can specify the options `join_use_nulls`, `max_rows_in_join`, `max_bytes_in_join`, and `join_overflow_mode` when creating a `Join` type table. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3728)
+* Added the `joinGet` function that allows you to use a `Join` type table like a dictionary. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3728)
+* Added the `partition_key`, `sorting_key`, `primary_key`, and `sampling_key` columns to the `system.tables` table in order to provide information about table keys. [#3609](https://github.com/ClickHouse/ClickHouse/pull/3609)
+* Added the `is_in_partition_key`, `is_in_sorting_key`, `is_in_primary_key`, and `is_in_sampling_key` columns to the `system.columns` table. [#3609](https://github.com/ClickHouse/ClickHouse/pull/3609)
+* Added the `min_time` and `max_time` columns to the `system.parts` table. These columns are populated when the partitioning key is an expression consisting of `DateTime` columns. [Emmanuel Donin de Rosière](https://github.com/ClickHouse/ClickHouse/pull/3800)
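To make the sorting-key changes above concrete, here is a minimal sketch (table and column names are hypothetical, not from the changelog) of a `MergeTree`-family table whose sorting key is longer than its index, extended later with `MODIFY ORDER BY`:

```sql
CREATE TABLE visits_sketch
(
    CounterID UInt32,
    EventDate Date,
    UserID UInt64,
    Visits UInt64
)
ENGINE = SummingMergeTree
PARTITION BY toYYYYMM(EventDate)
PRIMARY KEY (CounterID, EventDate)        -- the index
ORDER BY (CounterID, EventDate, UserID);  -- the sorting key may be longer than the index

-- A newly added dimension column can be appended to the sorting key in the same ALTER:
ALTER TABLE visits_sketch
    ADD COLUMN Region UInt32,
    MODIFY ORDER BY (CounterID, EventDate, UserID, Region);
```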
### Bug fixes:
-* Fixes and performance improvements for the `LowCardinality` data type. `GROUP BY` using `LowCardinality(Nullable(...))`. Getting the values of `extremes`. Processing high-order functions. `LEFT ARRAY JOIN`. Distributed `GROUP BY`. Functions that return `Array`. Execution of `ORDER BY`. Writing to `Distributed` tables (nicelulu). Backward compatibility for `INSERT` queries from old clients that implement the `Native` protocol. Support for `LowCardinality` for `JOIN`. Improved performance when working in a single stream. [#3823](https://github.com/yandex/ClickHouse/pull/3823) [#3803](https://github.com/yandex/ClickHouse/pull/3803) [#3799](https://github.com/yandex/ClickHouse/pull/3799) [#3769](https://github.com/yandex/ClickHouse/pull/3769) [#3744](https://github.com/yandex/ClickHouse/pull/3744) [#3681](https://github.com/yandex/ClickHouse/pull/3681) [#3651](https://github.com/yandex/ClickHouse/pull/3651) [#3649](https://github.com/yandex/ClickHouse/pull/3649) [#3641](https://github.com/yandex/ClickHouse/pull/3641) [#3632](https://github.com/yandex/ClickHouse/pull/3632) [#3568](https://github.com/yandex/ClickHouse/pull/3568) [#3523](https://github.com/yandex/ClickHouse/pull/3523) [#3518](https://github.com/yandex/ClickHouse/pull/3518)
-* Fixed how the `select_sequential_consistency` option works. Previously, when this setting was enabled, an incomplete result was sometimes returned after beginning to write to a new partition. [#2863](https://github.com/yandex/ClickHouse/pull/2863)
-* Databases are correctly specified when executing DDL `ON CLUSTER` queries and `ALTER UPDATE/DELETE`. [#3772](https://github.com/yandex/ClickHouse/pull/3772) [#3460](https://github.com/yandex/ClickHouse/pull/3460)
-* Databases are correctly specified for subqueries inside a VIEW. [#3521](https://github.com/yandex/ClickHouse/pull/3521)
-* Fixed a bug in `PREWHERE` with `FINAL` for `VersionedCollapsingMergeTree`. [7167bfd7](https://github.com/yandex/ClickHouse/commit/7167bfd7b365538f7a91c4307ad77e552ab4e8c1)
-* Now you can use `KILL QUERY` to cancel queries that have not started yet because they are waiting for the table to be locked. [#3517](https://github.com/yandex/ClickHouse/pull/3517)
-* Corrected date and time calculations if the clocks were moved back at midnight (this happens in Iran, and happened in Moscow from 1981 to 1983). Previously, this led to the time being reset a day earlier than necessary, and also caused incorrect formatting of the date and time in text format. [#3819](https://github.com/yandex/ClickHouse/pull/3819)
-* Fixed bugs in some cases of `VIEW` and subqueries that omit the database. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3521)
-* Fixed a race condition when simultaneously reading from a `MATERIALIZED VIEW` and deleting a `MATERIALIZED VIEW` due to not locking the internal `MATERIALIZED VIEW`. [#3404](https://github.com/yandex/ClickHouse/pull/3404) [#3694](https://github.com/yandex/ClickHouse/pull/3694)
-* Fixed the error `Lock handler cannot be nullptr.` [#3689](https://github.com/yandex/ClickHouse/pull/3689)
-* Fixed query processing when the `compile_expressions` option is enabled (it's enabled by default). Nondeterministic constant expressions like the `now` function are no longer unfolded. [#3457](https://github.com/yandex/ClickHouse/pull/3457)
+* Fixes and performance improvements for the `LowCardinality` data type. `GROUP BY` using `LowCardinality(Nullable(...))`. Getting the values of `extremes`. Processing higher-order functions. `LEFT ARRAY JOIN`. Distributed `GROUP BY`. Functions that return `Array`. Execution of `ORDER BY`. Writing to `Distributed` tables (nicelulu). Backward compatibility for `INSERT` queries from old clients that implement the `Native` protocol. Support for `LowCardinality` for `JOIN`. Improved performance when working in a single stream. [#3823](https://github.com/ClickHouse/ClickHouse/pull/3823) [#3803](https://github.com/ClickHouse/ClickHouse/pull/3803) [#3799](https://github.com/ClickHouse/ClickHouse/pull/3799) [#3769](https://github.com/ClickHouse/ClickHouse/pull/3769) [#3744](https://github.com/ClickHouse/ClickHouse/pull/3744) [#3681](https://github.com/ClickHouse/ClickHouse/pull/3681) [#3651](https://github.com/ClickHouse/ClickHouse/pull/3651) [#3649](https://github.com/ClickHouse/ClickHouse/pull/3649) [#3641](https://github.com/ClickHouse/ClickHouse/pull/3641) [#3632](https://github.com/ClickHouse/ClickHouse/pull/3632) [#3568](https://github.com/ClickHouse/ClickHouse/pull/3568) [#3523](https://github.com/ClickHouse/ClickHouse/pull/3523) [#3518](https://github.com/ClickHouse/ClickHouse/pull/3518)
+* Fixed how the `select_sequential_consistency` option works. Previously, when this setting was enabled, an incomplete result was sometimes returned after beginning to write to a new partition. [#2863](https://github.com/ClickHouse/ClickHouse/pull/2863)
+* Databases are correctly specified when executing DDL `ON CLUSTER` queries and `ALTER UPDATE/DELETE`. [#3772](https://github.com/ClickHouse/ClickHouse/pull/3772) [#3460](https://github.com/ClickHouse/ClickHouse/pull/3460)
+* Databases are correctly specified for subqueries inside a VIEW. [#3521](https://github.com/ClickHouse/ClickHouse/pull/3521)
+* Fixed a bug in `PREWHERE` with `FINAL` for `VersionedCollapsingMergeTree`. [7167bfd7](https://github.com/ClickHouse/ClickHouse/commit/7167bfd7b365538f7a91c4307ad77e552ab4e8c1)
+* Now you can use `KILL QUERY` to cancel queries that have not started yet because they are waiting for the table to be locked. [#3517](https://github.com/ClickHouse/ClickHouse/pull/3517)
+* Corrected date and time calculations if the clocks were moved back at midnight (this happens in Iran, and happened in Moscow from 1981 to 1983). Previously, this led to the time being reset a day earlier than necessary, and also caused incorrect formatting of the date and time in text format. [#3819](https://github.com/ClickHouse/ClickHouse/pull/3819)
+* Fixed bugs in some cases of `VIEW` and subqueries that omit the database. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3521)
+* Fixed a race condition when simultaneously reading from a `MATERIALIZED VIEW` and deleting a `MATERIALIZED VIEW` due to not locking the internal `MATERIALIZED VIEW`. [#3404](https://github.com/ClickHouse/ClickHouse/pull/3404) [#3694](https://github.com/ClickHouse/ClickHouse/pull/3694)
+* Fixed the error `Lock handler cannot be nullptr.` [#3689](https://github.com/ClickHouse/ClickHouse/pull/3689)
+* Fixed query processing when the `compile_expressions` option is enabled (it's enabled by default). Nondeterministic constant expressions like the `now` function are no longer unfolded. [#3457](https://github.com/ClickHouse/ClickHouse/pull/3457)
* Fixed a crash when specifying a non-constant scale argument in `toDecimal32/64/128` functions (see the sketch after this list).
-* Fixed an error when trying to insert an array with `NULL` elements in the `Values` format into a column of type `Array` without `Nullable` (if `input_format_values_interpret_expressions` = 1). [#3487](https://github.com/yandex/ClickHouse/pull/3487) [#3503](https://github.com/yandex/ClickHouse/pull/3503)
-* Fixed continuous error logging in `DDLWorker` if ZooKeeper is not available. [8f50c620](https://github.com/yandex/ClickHouse/commit/8f50c620334988b28018213ec0092fe6423847e2)
-* Fixed the return type for `quantile*` functions from `Date` and `DateTime` types of arguments. [#3580](https://github.com/yandex/ClickHouse/pull/3580)
-* Fixed the `WITH` clause if it specifies a simple alias without expressions. [#3570](https://github.com/yandex/ClickHouse/pull/3570)
-* Fixed processing of queries with named sub-queries and qualified column names when `enable_optimize_predicate_expression` is enabled. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3588)
-* Fixed the error `Attempt to attach to nullptr thread group` when working with materialized views. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3623)
-* Fixed a crash when passing certain incorrect arguments to the `arrayReverse` function. [73e3a7b6](https://github.com/yandex/ClickHouse/commit/73e3a7b662161d6005e7727d8a711b930386b871)
-* Fixed the buffer overflow in the `extractURLParameter` function. Improved performance. Added correct processing of strings containing zero bytes. [141e9799](https://github.com/yandex/ClickHouse/commit/141e9799e49201d84ea8e951d1bed4fb6d3dacb5)
-* Fixed buffer overflow in the `lowerUTF8` and `upperUTF8` functions. Removed the ability to execute these functions over `FixedString` type arguments. [#3662](https://github.com/yandex/ClickHouse/pull/3662)
-* Fixed a rare race condition when deleting `MergeTree` tables. [#3680](https://github.com/yandex/ClickHouse/pull/3680)
-* Fixed a race condition when reading from `Buffer` tables and simultaneously performing `ALTER` or `DROP` on the target tables. [#3719](https://github.com/yandex/ClickHouse/pull/3719)
-* Fixed a segfault if the `max_temporary_non_const_columns` limit was exceeded. [#3788](https://github.com/yandex/ClickHouse/pull/3788)
+* Fixed an error when trying to insert an array with `NULL` elements in the `Values` format into a column of type `Array` without `Nullable` (if `input_format_values_interpret_expressions` = 1). [#3487](https://github.com/ClickHouse/ClickHouse/pull/3487) [#3503](https://github.com/ClickHouse/ClickHouse/pull/3503)
+* Fixed continuous error logging in `DDLWorker` if ZooKeeper is not available. [8f50c620](https://github.com/ClickHouse/ClickHouse/commit/8f50c620334988b28018213ec0092fe6423847e2)
+* Fixed the return type for `quantile*` functions from `Date` and `DateTime` types of arguments. [#3580](https://github.com/ClickHouse/ClickHouse/pull/3580)
+* Fixed the `WITH` clause if it specifies a simple alias without expressions. [#3570](https://github.com/ClickHouse/ClickHouse/pull/3570)
+* Fixed processing of queries with named sub-queries and qualified column names when `enable_optimize_predicate_expression` is enabled. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3588)
+* Fixed the error `Attempt to attach to nullptr thread group` when working with materialized views. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3623)
+* Fixed a crash when passing certain incorrect arguments to the `arrayReverse` function. [73e3a7b6](https://github.com/ClickHouse/ClickHouse/commit/73e3a7b662161d6005e7727d8a711b930386b871)
+* Fixed the buffer overflow in the `extractURLParameter` function. Improved performance. Added correct processing of strings containing zero bytes. [141e9799](https://github.com/ClickHouse/ClickHouse/commit/141e9799e49201d84ea8e951d1bed4fb6d3dacb5)
+* Fixed buffer overflow in the `lowerUTF8` and `upperUTF8` functions. Removed the ability to execute these functions over `FixedString` type arguments. [#3662](https://github.com/ClickHouse/ClickHouse/pull/3662)
+* Fixed a rare race condition when deleting `MergeTree` tables. [#3680](https://github.com/ClickHouse/ClickHouse/pull/3680)
+* Fixed a race condition when reading from `Buffer` tables and simultaneously performing `ALTER` or `DROP` on the target tables. [#3719](https://github.com/ClickHouse/ClickHouse/pull/3719)
+* Fixed a segfault if the `max_temporary_non_const_columns` limit was exceeded. [#3788](https://github.com/ClickHouse/ClickHouse/pull/3788)
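As an illustration of the `toDecimal32/64/128` fix above, a query of roughly this shape (a hypothetical minimal repro, not taken from the changelog) passes a non-constant scale argument; after the fix it should fail with an ordinary exception instead of crashing the server:

```sql
-- The second (scale) argument is a per-row value rather than a constant.
SELECT toDecimal32(3.14, number % 5) FROM system.numbers LIMIT 3;
```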
### Improvements:
-* The server does not write the processed configuration files to the `/etc/clickhouse-server/` directory. Instead, it saves them in the `preprocessed_configs` directory inside `path`. This means that the `/etc/clickhouse-server/` directory doesn't have write access for the `clickhouse` user, which improves security. [#2443](https://github.com/yandex/ClickHouse/pull/2443)
-* The `min_merge_bytes_to_use_direct_io` option is set to 10 GiB by default. A merge that forms large parts of tables from the MergeTree family will be performed in `O_DIRECT` mode, which prevents excessive page cache eviction. [#3504](https://github.com/yandex/ClickHouse/pull/3504)
-* Accelerated server start when there is a very large number of tables. [#3398](https://github.com/yandex/ClickHouse/pull/3398)
-* Added a connection pool and HTTP `Keep-Alive` for connections between replicas. [#3594](https://github.com/yandex/ClickHouse/pull/3594)
-* If the query syntax is invalid, the `400 Bad Request` code is returned in the `HTTP` interface (500 was returned previously). [31bc680a](https://github.com/yandex/ClickHouse/commit/31bc680ac5f4bb1d0360a8ba4696fa84bb47d6ab)
-* The `join_default_strictness` option is set to `ALL` by default for compatibility. [120e2cbe](https://github.com/yandex/ClickHouse/commit/120e2cbe2ff4fbad626c28042d9b28781c805afe)
-* Removed logging to `stderr` from the `re2` library for invalid or complex regular expressions. [#3723](https://github.com/yandex/ClickHouse/pull/3723)
-* Added for the `Kafka` table engine: checks for subscriptions before beginning to read from Kafka; the kafka_max_block_size setting for the table. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3396)
-* The `cityHash64`, `farmHash64`, `metroHash64`, `sipHash64`, `halfMD5`, `murmurHash2_32`, `murmurHash2_64`, `murmurHash3_32`, and `murmurHash3_64` functions now work for any number of arguments and for arguments in the form of tuples. [#3451](https://github.com/yandex/ClickHouse/pull/3451) [#3519](https://github.com/yandex/ClickHouse/pull/3519)
-* The `arrayReverse` function now works with any types of arrays. [73e3a7b6](https://github.com/yandex/ClickHouse/commit/73e3a7b662161d6005e7727d8a711b930386b871)
-* Added an optional parameter: the slot size for the `timeSlots` function. [Kirill Shvakov](https://github.com/yandex/ClickHouse/pull/3724)
-* For `FULL` and `RIGHT JOIN`, the `max_block_size` setting is used for a stream of non-joined data from the right table. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3699)
-* Added the `--secure` command line parameter in `clickhouse-benchmark` and `clickhouse-performance-test` to enable TLS. [#3688](https://github.com/yandex/ClickHouse/pull/3688) [#3690](https://github.com/yandex/ClickHouse/pull/3690)
-* Type conversion when the structure of a `Buffer` type table does not match the structure of the destination table. [Vitaly Baranov](https://github.com/yandex/ClickHouse/pull/3603)
-* Added the `tcp_keep_alive_timeout` option to enable keep-alive packets after inactivity for the specified time interval. [#3441](https://github.com/yandex/ClickHouse/pull/3441)
-* Removed unnecessary quoting of values for the partition key in the `system.parts` table if it consists of a single column. [#3652](https://github.com/yandex/ClickHouse/pull/3652)
-* The modulo function works for `Date` and `DateTime` data types. [#3385](https://github.com/yandex/ClickHouse/pull/3385)
-* Added synonyms for the `POWER`, `LN`, `LCASE`, `UCASE`, `REPLACE`, `LOCATE`, `SUBSTR`, and `MID` functions. [#3774](https://github.com/yandex/ClickHouse/pull/3774) [#3763](https://github.com/yandex/ClickHouse/pull/3763) Some function names are case-insensitive for compatibility with the SQL standard. Added syntactic sugar `SUBSTRING(expr FROM start FOR length)` for compatibility with SQL. [#3804](https://github.com/yandex/ClickHouse/pull/3804)
-* Added the ability to `mlock` memory pages corresponding to `clickhouse-server` executable code to prevent it from being forced out of memory. This feature is disabled by default. [#3553](https://github.com/yandex/ClickHouse/pull/3553)
-* Improved performance when reading from `O_DIRECT` (with the `min_bytes_to_use_direct_io` option enabled). [#3405](https://github.com/yandex/ClickHouse/pull/3405)
-* Improved performance of the `dictGet...OrDefault` function for a constant key argument and a non-constant default argument. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3563)
-* The `firstSignificantSubdomain` function now processes the domains `gov`, `mil`, and `edu`. [Igor Hatarist](https://github.com/yandex/ClickHouse/pull/3601) Improved performance. [#3628](https://github.com/yandex/ClickHouse/pull/3628)
+* The server does not write the processed configuration files to the `/etc/clickhouse-server/` directory. Instead, it saves them in the `preprocessed_configs` directory inside `path`. This means that the `clickhouse` user no longer needs write access to the `/etc/clickhouse-server/` directory, which improves security. [#2443](https://github.com/ClickHouse/ClickHouse/pull/2443)
+* The `min_merge_bytes_to_use_direct_io` option is set to 10 GiB by default. A merge that forms large parts of tables from the MergeTree family will be performed in `O_DIRECT` mode, which prevents excessive page cache eviction. [#3504](https://github.com/ClickHouse/ClickHouse/pull/3504)
+* Accelerated server start when there is a very large number of tables. [#3398](https://github.com/ClickHouse/ClickHouse/pull/3398)
+* Added a connection pool and HTTP `Keep-Alive` for connections between replicas. [#3594](https://github.com/ClickHouse/ClickHouse/pull/3594)
+* If the query syntax is invalid, the `400 Bad Request` code is returned in the `HTTP` interface (500 was returned previously). [31bc680a](https://github.com/ClickHouse/ClickHouse/commit/31bc680ac5f4bb1d0360a8ba4696fa84bb47d6ab)
+* The `join_default_strictness` option is set to `ALL` by default for compatibility. [120e2cbe](https://github.com/ClickHouse/ClickHouse/commit/120e2cbe2ff4fbad626c28042d9b28781c805afe)
+* Removed logging to `stderr` from the `re2` library for invalid or complex regular expressions. [#3723](https://github.com/ClickHouse/ClickHouse/pull/3723)
+* Added for the `Kafka` table engine: checks for subscriptions before beginning to read from Kafka, and the `kafka_max_block_size` setting for the table. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3396)
+* The `cityHash64`, `farmHash64`, `metroHash64`, `sipHash64`, `halfMD5`, `murmurHash2_32`, `murmurHash2_64`, `murmurHash3_32`, and `murmurHash3_64` functions now work for any number of arguments and for arguments in the form of tuples. [#3451](https://github.com/ClickHouse/ClickHouse/pull/3451) [#3519](https://github.com/ClickHouse/ClickHouse/pull/3519)
+* The `arrayReverse` function now works with any types of arrays. [73e3a7b6](https://github.com/ClickHouse/ClickHouse/commit/73e3a7b662161d6005e7727d8a711b930386b871)
+* Added an optional parameter: the slot size for the `timeSlots` function. [Kirill Shvakov](https://github.com/ClickHouse/ClickHouse/pull/3724)
+* For `FULL` and `RIGHT JOIN`, the `max_block_size` setting is used for a stream of non-joined data from the right table. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3699)
+* Added the `--secure` command line parameter in `clickhouse-benchmark` and `clickhouse-performance-test` to enable TLS. [#3688](https://github.com/ClickHouse/ClickHouse/pull/3688) [#3690](https://github.com/ClickHouse/ClickHouse/pull/3690)
+* Type conversion when the structure of a `Buffer` type table does not match the structure of the destination table. [Vitaly Baranov](https://github.com/ClickHouse/ClickHouse/pull/3603)
+* Added the `tcp_keep_alive_timeout` option to enable keep-alive packets after inactivity for the specified time interval. [#3441](https://github.com/ClickHouse/ClickHouse/pull/3441)
+* Removed unnecessary quoting of values for the partition key in the `system.parts` table if it consists of a single column. [#3652](https://github.com/ClickHouse/ClickHouse/pull/3652)
+* The modulo function works for `Date` and `DateTime` data types. [#3385](https://github.com/ClickHouse/ClickHouse/pull/3385)
+* Added synonyms for the `POWER`, `LN`, `LCASE`, `UCASE`, `REPLACE`, `LOCATE`, `SUBSTR`, and `MID` functions. [#3774](https://github.com/ClickHouse/ClickHouse/pull/3774) [#3763](https://github.com/ClickHouse/ClickHouse/pull/3763) Some function names are case-insensitive for compatibility with the SQL standard. Added syntactic sugar `SUBSTRING(expr FROM start FOR length)` for compatibility with SQL (see the sketch after this list). [#3804](https://github.com/ClickHouse/ClickHouse/pull/3804)
+* Added the ability to `mlock` memory pages corresponding to `clickhouse-server` executable code to prevent it from being forced out of memory. This feature is disabled by default. [#3553](https://github.com/ClickHouse/ClickHouse/pull/3553)
+* Improved performance when reading from `O_DIRECT` (with the `min_bytes_to_use_direct_io` option enabled). [#3405](https://github.com/ClickHouse/ClickHouse/pull/3405)
+* Improved performance of the `dictGet...OrDefault` function for a constant key argument and a non-constant default argument. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3563)
+* The `firstSignificantSubdomain` function now processes the domains `gov`, `mil`, and `edu`. [Igor Hatarist](https://github.com/ClickHouse/ClickHouse/pull/3601) Improved performance. [#3628](https://github.com/ClickHouse/ClickHouse/pull/3628)
* Ability to specify custom environment variables for starting `clickhouse-server` using the `SYS-V init.d` script by defining `CLICKHOUSE_PROGRAM_ENV` in `/etc/default/clickhouse`.
-[Pavlo Bashynskyi](https://github.com/yandex/ClickHouse/pull/3612)
-* Correct return code for the clickhouse-server init script. [#3516](https://github.com/yandex/ClickHouse/pull/3516)
-* The `system.metrics` table now has the `VersionInteger` metric, and `system.build_options` has the added line `VERSION_INTEGER`, which contains the numeric form of the ClickHouse version, such as `18016000`. [#3644](https://github.com/yandex/ClickHouse/pull/3644)
-* Removed the ability to compare the `Date` type with a number to avoid potential errors like `date = 2018-12-17`, where quotes around the date are omitted by mistake. [#3687](https://github.com/yandex/ClickHouse/pull/3687)
-* Fixed the behavior of stateful functions like `rowNumberInAllBlocks`. They previously output a result that was one number larger due to starting during query analysis. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3729)
-* If the `force_restore_data` file can't be deleted, an error message is displayed. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3794)
+[Pavlo Bashynskyi](https://github.com/ClickHouse/ClickHouse/pull/3612)
+* Correct return code for the clickhouse-server init script. [#3516](https://github.com/ClickHouse/ClickHouse/pull/3516)
+* The `system.metrics` table now has the `VersionInteger` metric, and `system.build_options` has the added line `VERSION_INTEGER`, which contains the numeric form of the ClickHouse version, such as `18016000`. [#3644](https://github.com/ClickHouse/ClickHouse/pull/3644)
+* Removed the ability to compare the `Date` type with a number to avoid potential errors like `date = 2018-12-17`, where quotes around the date are omitted by mistake. [#3687](https://github.com/ClickHouse/ClickHouse/pull/3687)
+* Fixed the behavior of stateful functions like `rowNumberInAllBlocks`. Previously, they output a result that was one greater than expected because they started counting during query analysis. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3729)
+* If the `force_restore_data` file can't be deleted, an error message is displayed. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3794)
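A short sketch of the compatibility synonyms and the `SUBSTRING` sugar mentioned above (values are hypothetical):

```sql
SELECT
    substring('ClickHouse', 1, 5) AS native_call,                -- 'Click'
    SUBSTRING('ClickHouse' FROM 1 FOR 5) AS sql_standard_sugar,  -- same result: 'Click'
    LCASE('ClickHouse') AS lowered,                              -- synonym for lower()
    POWER(2, 10) AS p;                                           -- synonym for pow(): 1024
```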
### Build improvements:
-* Updated the `jemalloc` library, which fixes a potential memory leak. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3557)
-* Profiling with `jemalloc` is enabled by default in order to debug builds. [2cc82f5c](https://github.com/yandex/ClickHouse/commit/2cc82f5cbe266421cd4c1165286c2c47e5ffcb15)
-* Added the ability to run integration tests when only `Docker` is installed on the system. [#3650](https://github.com/yandex/ClickHouse/pull/3650)
-* Added the fuzz expression test in SELECT queries. [#3442](https://github.com/yandex/ClickHouse/pull/3442)
-* Added a stress test for commits, which performs functional tests in parallel and in random order to detect more race conditions. [#3438](https://github.com/yandex/ClickHouse/pull/3438)
-* Improved the method for starting clickhouse-server in a Docker image. [Elghazal Ahmed](https://github.com/yandex/ClickHouse/pull/3663)
-* For a Docker image, added support for initializing databases using files in the `/docker-entrypoint-initdb.d` directory. [Konstantin Lebedev](https://github.com/yandex/ClickHouse/pull/3695)
-* Fixes for builds on ARM. [#3709](https://github.com/yandex/ClickHouse/pull/3709)
+* Updated the `jemalloc` library, which fixes a potential memory leak. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3557)
+* Profiling with `jemalloc` is enabled by default for debug builds. [2cc82f5c](https://github.com/ClickHouse/ClickHouse/commit/2cc82f5cbe266421cd4c1165286c2c47e5ffcb15)
+* Added the ability to run integration tests when only `Docker` is installed on the system. [#3650](https://github.com/ClickHouse/ClickHouse/pull/3650)
+* Added fuzz testing of expressions in `SELECT` queries. [#3442](https://github.com/ClickHouse/ClickHouse/pull/3442)
+* Added a stress test for commits, which performs functional tests in parallel and in random order to detect more race conditions. [#3438](https://github.com/ClickHouse/ClickHouse/pull/3438)
+* Improved the method for starting clickhouse-server in a Docker image. [Elghazal Ahmed](https://github.com/ClickHouse/ClickHouse/pull/3663)
+* For a Docker image, added support for initializing databases using files in the `/docker-entrypoint-initdb.d` directory. [Konstantin Lebedev](https://github.com/ClickHouse/ClickHouse/pull/3695)
+* Fixes for builds on ARM. [#3709](https://github.com/ClickHouse/ClickHouse/pull/3709)
### Backward incompatible changes:
-* Removed the ability to compare the `Date` type with a number. Instead of `toDate('2018-12-18') = 17883`, you must use explicit type conversion `= toDate(17883)` [#3687](https://github.com/yandex/ClickHouse/pull/3687)
+* Removed the ability to compare the `Date` type with a number. Instead of `toDate('2018-12-18') = 17883`, you must use explicit type conversion `= toDate(17883)`. [#3687](https://github.com/ClickHouse/ClickHouse/pull/3687)
## ClickHouse release 18.14.19, 2018-12-19
### Bug fixes:
-* Fixed an error that led to problems with updating dictionaries with the ODBC source. [#3825](https://github.com/yandex/ClickHouse/issues/3825), [#3829](https://github.com/yandex/ClickHouse/issues/3829)
-* Databases are correctly specified when executing DDL `ON CLUSTER` queries. [#3460](https://github.com/yandex/ClickHouse/pull/3460)
-* Fixed a segfault if the `max_temporary_non_const_columns` limit was exceeded. [#3788](https://github.com/yandex/ClickHouse/pull/3788)
+* Fixed an error that led to problems with updating dictionaries with the ODBC source. [#3825](https://github.com/ClickHouse/ClickHouse/issues/3825), [#3829](https://github.com/ClickHouse/ClickHouse/issues/3829)
+* Databases are correctly specified when executing DDL `ON CLUSTER` queries. [#3460](https://github.com/ClickHouse/ClickHouse/pull/3460)
+* Fixed a segfault if the `max_temporary_non_const_columns` limit was exceeded. [#3788](https://github.com/ClickHouse/ClickHouse/pull/3788)
### Build improvements:
@@ -1551,296 +1551,296 @@ This release contains exactly the same set of patches as 19.3.6.
## ClickHouse release 18.14.18, 2018-12-04
### Bug fixes:
-* Fixed error in `dictGet...` function for dictionaries of type `range`, if one of the arguments is constant and other is not. [#3751](https://github.com/yandex/ClickHouse/pull/3751)
-* Fixed error that caused messages `netlink: '...': attribute type 1 has an invalid length` to be printed in Linux kernel log, that was happening only on fresh enough versions of Linux kernel. [#3749](https://github.com/yandex/ClickHouse/pull/3749)
-* Fixed segfault in function `empty` for argument of `FixedString` type. [Daniel, Dao Quang Minh](https://github.com/yandex/ClickHouse/pull/3703)
-* Fixed excessive memory allocation when using large value of `max_query_size` setting (a memory chunk of `max_query_size` bytes was preallocated at once). [#3720](https://github.com/yandex/ClickHouse/pull/3720)
+* Fixed an error in the `dictGet...` function for dictionaries of type `range`, if one of the arguments is constant and the other is not. [#3751](https://github.com/ClickHouse/ClickHouse/pull/3751)
+* Fixed an error that caused the messages `netlink: '...': attribute type 1 has an invalid length` to be printed in the Linux kernel log, which happened only on sufficiently recent versions of the Linux kernel. [#3749](https://github.com/ClickHouse/ClickHouse/pull/3749)
+* Fixed a segfault in the `empty` function for an argument of the `FixedString` type. [Daniel, Dao Quang Minh](https://github.com/ClickHouse/ClickHouse/pull/3703)
+* Fixed excessive memory allocation when using a large value of the `max_query_size` setting (a memory chunk of `max_query_size` bytes was preallocated at once). [#3720](https://github.com/ClickHouse/ClickHouse/pull/3720)
### Build changes:
-* Fixed build with LLVM/Clang libraries of version 7 from the OS packages (these libraries are used for runtime query compilation). [#3582](https://github.com/yandex/ClickHouse/pull/3582)
+* Fixed build with LLVM/Clang libraries of version 7 from the OS packages (these libraries are used for runtime query compilation). [#3582](https://github.com/ClickHouse/ClickHouse/pull/3582)
## ClickHouse release 18.14.17, 2018-11-30
### Bug fixes:
-* Fixed cases when the ODBC bridge process did not terminate with the main server process. [#3642](https://github.com/yandex/ClickHouse/pull/3642)
-* Fixed synchronous insertion into the `Distributed` table with a columns list that differs from the column list of the remote table. [#3673](https://github.com/yandex/ClickHouse/pull/3673)
-* Fixed a rare race condition that can lead to a crash when dropping a MergeTree table. [#3643](https://github.com/yandex/ClickHouse/pull/3643)
-* Fixed a query deadlock in case when query thread creation fails with the `Resource temporarily unavailable` error. [#3643](https://github.com/yandex/ClickHouse/pull/3643)
-* Fixed parsing of the `ENGINE` clause when the `CREATE AS table` syntax was used and the `ENGINE` clause was specified before the `AS table` (the error resulted in ignoring the specified engine). [#3692](https://github.com/yandex/ClickHouse/pull/3692)
+* Fixed cases when the ODBC bridge process did not terminate with the main server process. [#3642](https://github.com/ClickHouse/ClickHouse/pull/3642)
+* Fixed synchronous insertion into the `Distributed` table with a column list that differs from the column list of the remote table. [#3673](https://github.com/ClickHouse/ClickHouse/pull/3673)
+* Fixed a rare race condition that can lead to a crash when dropping a MergeTree table. [#3643](https://github.com/ClickHouse/ClickHouse/pull/3643)
+* Fixed a query deadlock in the case when query thread creation fails with the `Resource temporarily unavailable` error. [#3643](https://github.com/ClickHouse/ClickHouse/pull/3643)
+* Fixed parsing of the `ENGINE` clause when the `CREATE AS table` syntax was used and the `ENGINE` clause was specified before the `AS table` (the error resulted in ignoring the specified engine). [#3692](https://github.com/ClickHouse/ClickHouse/pull/3692)
## ClickHouse release 18.14.15, 2018-11-21
### Bug fixes:
-* The size of memory chunk was overestimated while deserializing the column of type `Array(String)` that leads to "Memory limit exceeded" errors. The issue appeared in version 18.12.13. [#3589](https://github.com/yandex/ClickHouse/issues/3589)
+* The size of a memory chunk was overestimated while deserializing a column of type `Array(String)`, which led to "Memory limit exceeded" errors. The issue appeared in version 18.12.13. [#3589](https://github.com/ClickHouse/ClickHouse/issues/3589)
## ClickHouse release 18.14.14, 2018-11-20
### Bug fixes:
-* Fixed `ON CLUSTER` queries when cluster configured as secure (flag ``). [#3599](https://github.com/yandex/ClickHouse/pull/3599)
+* Fixed `ON CLUSTER` queries when the cluster is configured as secure (the `<secure>` flag). [#3599](https://github.com/ClickHouse/ClickHouse/pull/3599)
### Build changes:
-* Fixed problems (llvm-7 from system, macos) [#3582](https://github.com/yandex/ClickHouse/pull/3582)
+* Fixed build problems (llvm-7 from the system packages, macOS). [#3582](https://github.com/ClickHouse/ClickHouse/pull/3582)
## ClickHouse release 18.14.13, 2018-11-08
### Bug fixes:
-* Fixed the `Block structure mismatch in MergingSorted stream` error. [#3162](https://github.com/yandex/ClickHouse/issues/3162)
-* Fixed `ON CLUSTER` queries in case when secure connections were turned on in the cluster config (the `` flag). [#3465](https://github.com/yandex/ClickHouse/pull/3465)
-* Fixed an error in queries that used `SAMPLE`, `PREWHERE` and alias columns. [#3543](https://github.com/yandex/ClickHouse/pull/3543)
-* Fixed a rare `unknown compression method` error when the `min_bytes_to_use_direct_io` setting was enabled. [3544](https://github.com/yandex/ClickHouse/pull/3544)
+* Fixed the `Block structure mismatch in MergingSorted stream` error. [#3162](https://github.com/ClickHouse/ClickHouse/issues/3162)
+* Fixed `ON CLUSTER` queries in the case when secure connections were turned on in the cluster config (the `<secure>` flag). [#3465](https://github.com/ClickHouse/ClickHouse/pull/3465)
+* Fixed an error in queries that used `SAMPLE`, `PREWHERE` and alias columns. [#3543](https://github.com/ClickHouse/ClickHouse/pull/3543)
+* Fixed a rare `unknown compression method` error when the `min_bytes_to_use_direct_io` setting was enabled. [#3544](https://github.com/ClickHouse/ClickHouse/pull/3544)
### Performance improvements:
-* Fixed performance regression of queries with `GROUP BY` of columns of UInt16 or Date type when executing on AMD EPYC processors. [Igor Lapko](https://github.com/yandex/ClickHouse/pull/3512)
-* Fixed performance regression of queries that process long strings. [#3530](https://github.com/yandex/ClickHouse/pull/3530)
+* Fixed performance regression of queries with `GROUP BY` of columns of UInt16 or Date type when executing on AMD EPYC processors. [Igor Lapko](https://github.com/ClickHouse/ClickHouse/pull/3512)
+* Fixed performance regression of queries that process long strings. [#3530](https://github.com/ClickHouse/ClickHouse/pull/3530)
### Build improvements:
-* Improvements for simplifying the Arcadia build. [#3475](https://github.com/yandex/ClickHouse/pull/3475), [#3535](https://github.com/yandex/ClickHouse/pull/3535)
+* Improvements for simplifying the Arcadia build. [#3475](https://github.com/ClickHouse/ClickHouse/pull/3475), [#3535](https://github.com/ClickHouse/ClickHouse/pull/3535)
## ClickHouse release 18.14.12, 2018-11-02
### Bug fixes:
-* Fixed a crash on joining two unnamed subqueries. [#3505](https://github.com/yandex/ClickHouse/pull/3505)
-* Fixed generating incorrect queries (with an empty `WHERE` clause) when querying external databases. [hotid](https://github.com/yandex/ClickHouse/pull/3477)
-* Fixed using an incorrect timeout value in ODBC dictionaries. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3511)
+* Fixed a crash on joining two unnamed subqueries. [#3505](https://github.com/ClickHouse/ClickHouse/pull/3505)
+* Fixed generating incorrect queries (with an empty `WHERE` clause) when querying external databases. [hotid](https://github.com/ClickHouse/ClickHouse/pull/3477)
+* Fixed using an incorrect timeout value in ODBC dictionaries. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3511)
## ClickHouse release 18.14.11, 2018-10-29
### Bug fixes:
-* Fixed the error `Block structure mismatch in UNION stream: different number of columns` in LIMIT queries. [#2156](https://github.com/yandex/ClickHouse/issues/2156)
-* Fixed errors when merging data in tables containing arrays inside Nested structures. [#3397](https://github.com/yandex/ClickHouse/pull/3397)
-* Fixed incorrect query results if the `merge_tree_uniform_read_distribution` setting is disabled (it is enabled by default). [#3429](https://github.com/yandex/ClickHouse/pull/3429)
-* Fixed an error on inserts to a Distributed table in Native format. [#3411](https://github.com/yandex/ClickHouse/issues/3411)
+* Fixed the error `Block structure mismatch in UNION stream: different number of columns` in LIMIT queries. [#2156](https://github.com/ClickHouse/ClickHouse/issues/2156)
+* Fixed errors when merging data in tables containing arrays inside Nested structures. [#3397](https://github.com/ClickHouse/ClickHouse/pull/3397)
+* Fixed incorrect query results if the `merge_tree_uniform_read_distribution` setting is disabled (it is enabled by default). [#3429](https://github.com/ClickHouse/ClickHouse/pull/3429)
+* Fixed an error on inserts to a Distributed table in Native format. [#3411](https://github.com/ClickHouse/ClickHouse/issues/3411)
## ClickHouse release 18.14.10, 2018-10-23
-* The `compile_expressions` setting (JIT compilation of expressions) is disabled by default. [#3410](https://github.com/yandex/ClickHouse/pull/3410)
+* The `compile_expressions` setting (JIT compilation of expressions) is disabled by default. [#3410](https://github.com/ClickHouse/ClickHouse/pull/3410)
* The `enable_optimize_predicate_expression` setting is disabled by default.
## ClickHouse release 18.14.9, 2018-10-16
### New features:
-* The `WITH CUBE` modifier for `GROUP BY` (the alternative syntax `GROUP BY CUBE(...)` is also available). [#3172](https://github.com/yandex/ClickHouse/pull/3172)
-* Added the `formatDateTime` function. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/2770)
-* Added the `JDBC` table engine and `jdbc` table function (requires installing clickhouse-jdbc-bridge). [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
-* Added functions for working with the ISO week number: `toISOWeek`, `toISOYear`, `toStartOfISOYear`, and `toDayOfYear`. [#3146](https://github.com/yandex/ClickHouse/pull/3146)
-* Now you can use `Nullable` columns for `MySQL` and `ODBC` tables. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
-* Nested data structures can be read as nested objects in `JSONEachRow` format. Added the `input_format_import_nested_json` setting. [Veloman Yunkan](https://github.com/yandex/ClickHouse/pull/3144)
-* Parallel processing is available for many `MATERIALIZED VIEW`s when inserting data. See the `parallel_view_processing` setting. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3208)
-* Added the `SYSTEM FLUSH LOGS` query (forced log flushes to system tables such as `query_log`) [#3321](https://github.com/yandex/ClickHouse/pull/3321)
-* Now you can use pre-defined `database` and `table` macros when declaring `Replicated` tables. [#3251](https://github.com/yandex/ClickHouse/pull/3251)
-* Added the ability to read `Decimal` type values in engineering notation (indicating powers of ten). [#3153](https://github.com/yandex/ClickHouse/pull/3153)
+* The `WITH CUBE` modifier for `GROUP BY` (the alternative syntax `GROUP BY CUBE(...)` is also available; see the sketch after this list). [#3172](https://github.com/ClickHouse/ClickHouse/pull/3172)
+* Added the `formatDateTime` function. [Alexandr Krasheninnikov](https://github.com/ClickHouse/ClickHouse/pull/2770)
+* Added the `JDBC` table engine and `jdbc` table function (requires installing clickhouse-jdbc-bridge). [Alexandr Krasheninnikov](https://github.com/ClickHouse/ClickHouse/pull/3210)
+* Added functions for working with the ISO week number: `toISOWeek`, `toISOYear`, `toStartOfISOYear`, and `toDayOfYear`. [#3146](https://github.com/ClickHouse/ClickHouse/pull/3146)
+* Now you can use `Nullable` columns for `MySQL` and `ODBC` tables. [#3362](https://github.com/ClickHouse/ClickHouse/pull/3362)
+* Nested data structures can be read as nested objects in `JSONEachRow` format. Added the `input_format_import_nested_json` setting. [Veloman Yunkan](https://github.com/ClickHouse/ClickHouse/pull/3144)
+* Parallel processing is available for many `MATERIALIZED VIEW`s when inserting data. See the `parallel_view_processing` setting. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3208)
+* Added the `SYSTEM FLUSH LOGS` query (forced log flushes to system tables such as `query_log`) [#3321](https://github.com/ClickHouse/ClickHouse/pull/3321)
+* Now you can use pre-defined `database` and `table` macros when declaring `Replicated` tables. [#3251](https://github.com/ClickHouse/ClickHouse/pull/3251)
+* Added the ability to read `Decimal` type values in engineering notation (indicating powers of ten). [#3153](https://github.com/ClickHouse/ClickHouse/pull/3153)
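A minimal sketch of the new `GROUP BY` modifier (the `sales` table and its columns are hypothetical); both forms below are equivalent, and `CUBE` adds subtotal rows for every combination of the grouping keys, including the grand total:

```sql
SELECT region, product, sum(amount)
FROM sales
GROUP BY region, product WITH CUBE;

-- Alternative spelling of the same query:
SELECT region, product, sum(amount)
FROM sales
GROUP BY CUBE(region, product);
```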
### Experimental features:
-* Optimization of the GROUP BY clause for `LowCardinality data types.` [#3138](https://github.com/yandex/ClickHouse/pull/3138)
-* Optimized calculation of expressions for `LowCardinality data types.` [#3200](https://github.com/yandex/ClickHouse/pull/3200)
+* Optimization of the `GROUP BY` clause for `LowCardinality` data types. [#3138](https://github.com/ClickHouse/ClickHouse/pull/3138)
+* Optimized calculation of expressions for `LowCardinality` data types. [#3200](https://github.com/ClickHouse/ClickHouse/pull/3200)
### Improvements:
-* Significantly reduced memory consumption for queries with `ORDER BY` and `LIMIT`. See the `max_bytes_before_remerge_sort` setting. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
-* In the absence of `JOIN` (`LEFT`, `INNER`, ...), `INNER JOIN` is assumed. [#3147](https://github.com/yandex/ClickHouse/pull/3147)
-* Qualified asterisks work correctly in queries with `JOIN`. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3202)
-* The `ODBC` table engine correctly chooses the method for quoting identifiers in the SQL dialect of a remote database. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
+* Significantly reduced memory consumption for queries with `ORDER BY` and `LIMIT`. See the `max_bytes_before_remerge_sort` setting. [#3205](https://github.com/ClickHouse/ClickHouse/pull/3205)
+* If the `JOIN` type (`LEFT`, `INNER`, ...) is not specified, `INNER JOIN` is assumed. [#3147](https://github.com/ClickHouse/ClickHouse/pull/3147)
+* Qualified asterisks work correctly in queries with `JOIN`. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3202)
+* The `ODBC` table engine correctly chooses the method for quoting identifiers in the SQL dialect of a remote database. [Alexandr Krasheninnikov](https://github.com/ClickHouse/ClickHouse/pull/3210)
* The `compile_expressions` setting (JIT compilation of expressions) is enabled by default.
-* Fixed behavior for simultaneous DROP DATABASE/TABLE IF EXISTS and CREATE DATABASE/TABLE IF NOT EXISTS. Previously, a `CREATE DATABASE ... IF NOT EXISTS` query could return the error message "File ... already exists", and the `CREATE TABLE ... IF NOT EXISTS` and `DROP TABLE IF EXISTS` queries could return `Table ... is creating or attaching right now`. [#3101](https://github.com/yandex/ClickHouse/pull/3101)
-* LIKE and IN expressions with a constant right half are passed to the remote server when querying from MySQL or ODBC tables. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
-* Comparisons with constant expressions in a WHERE clause are passed to the remote server when querying from MySQL and ODBC tables. Previously, only comparisons with constants were passed. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
-* Correct calculation of row width in the terminal for `Pretty` formats, including strings with hieroglyphs. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3257).
+* Fixed behavior for simultaneous DROP DATABASE/TABLE IF EXISTS and CREATE DATABASE/TABLE IF NOT EXISTS. Previously, a `CREATE DATABASE ... IF NOT EXISTS` query could return the error message "File ... already exists", and the `CREATE TABLE ... IF NOT EXISTS` and `DROP TABLE IF EXISTS` queries could return `Table ... is creating or attaching right now`. [#3101](https://github.com/ClickHouse/ClickHouse/pull/3101)
+* LIKE and IN expressions with a constant right half are passed to the remote server when querying from MySQL or ODBC tables. [#3182](https://github.com/ClickHouse/ClickHouse/pull/3182)
+* Comparisons with constant expressions in a WHERE clause are passed to the remote server when querying from MySQL and ODBC tables. Previously, only comparisons with constants were passed. [#3182](https://github.com/ClickHouse/ClickHouse/pull/3182)
+* Correct calculation of row width in the terminal for `Pretty` formats, including strings with hieroglyphs. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/3257).
* `ON CLUSTER` can be specified for `ALTER UPDATE` queries.
-* Improved performance for reading data in `JSONEachRow` format. [#3332](https://github.com/yandex/ClickHouse/pull/3332)
-* Added synonyms for the `LENGTH` and `CHARACTER_LENGTH` functions for compatibility. The `CONCAT` function is no longer case-sensitive. [#3306](https://github.com/yandex/ClickHouse/pull/3306)
-* Added the `TIMESTAMP` synonym for the `DateTime` type. [#3390](https://github.com/yandex/ClickHouse/pull/3390)
+* Improved performance for reading data in `JSONEachRow` format. [#3332](https://github.com/ClickHouse/ClickHouse/pull/3332)
+* Added synonyms for the `LENGTH` and `CHARACTER_LENGTH` functions for compatibility. The `CONCAT` function is no longer case-sensitive. [#3306](https://github.com/ClickHouse/ClickHouse/pull/3306)
+* Added the `TIMESTAMP` synonym for the `DateTime` type. [#3390](https://github.com/ClickHouse/ClickHouse/pull/3390)
* There is always space reserved for query_id in the server logs, even if the log line is not related to a query. This makes it easier to parse server text logs with third-party tools.
-* Memory consumption by a query is logged when it exceeds the next level of an integer number of gigabytes. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
-* Added compatibility mode for the case when the client library that uses the Native protocol sends fewer columns by mistake than the server expects for the INSERT query. This scenario was possible when using the clickhouse-cpp library. Previously, this scenario caused the server to crash. [#3171](https://github.com/yandex/ClickHouse/pull/3171)
-* In a user-defined WHERE expression in `clickhouse-copier`, you can now use a `partition_key` alias (for additional filtering by source table partition). This is useful if the partitioning scheme changes during copying, but only changes slightly. [#3166](https://github.com/yandex/ClickHouse/pull/3166)
-* The workflow of the `Kafka` engine has been moved to a background thread pool in order to automatically reduce the speed of data reading at high loads. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215).
-* Support for reading `Tuple` and `Nested` values of structures like `struct` in the `Cap'n'Proto format`. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3216)
-* The list of top-level domains for the `firstSignificantSubdomain` function now includes the domain `biz`. [decaseal](https://github.com/yandex/ClickHouse/pull/3219)
-* In the configuration of external dictionaries, `null_value` is interpreted as the value of the default data type. [#3330](https://github.com/yandex/ClickHouse/pull/3330)
-* Support for the `intDiv` and `intDivOrZero` functions for `Decimal`. [b48402e8](https://github.com/yandex/ClickHouse/commit/b48402e8712e2b9b151e0eef8193811d433a1264)
-* Support for the `Date`, `DateTime`, `UUID`, and `Decimal` types as a key for the `sumMap` aggregate function. [#3281](https://github.com/yandex/ClickHouse/pull/3281)
-* Support for the `Decimal` data type in external dictionaries. [#3324](https://github.com/yandex/ClickHouse/pull/3324)
-* Support for the `Decimal` data type in `SummingMergeTree` tables. [#3348](https://github.com/yandex/ClickHouse/pull/3348)
-* Added specializations for `UUID` in `if`. [#3366](https://github.com/yandex/ClickHouse/pull/3366)
-* Reduced the number of `open` and `close` system calls when reading from a `MergeTree table`. [#3283](https://github.com/yandex/ClickHouse/pull/3283)
-* A `TRUNCATE TABLE` query can be executed on any replica (the query is passed to the leader replica). [Kirill Shvakov](https://github.com/yandex/ClickHouse/pull/3375)
+* Memory consumption by a query is logged each time it exceeds another whole number of gigabytes. [#3205](https://github.com/ClickHouse/ClickHouse/pull/3205)
+* Added a compatibility mode for the case when a client library that uses the Native protocol mistakenly sends fewer columns than the server expects for the INSERT query. This scenario was possible when using the clickhouse-cpp library. Previously, this scenario caused the server to crash. [#3171](https://github.com/ClickHouse/ClickHouse/pull/3171)
+* In a user-defined WHERE expression in `clickhouse-copier`, you can now use a `partition_key` alias (for additional filtering by source table partition). This is useful if the partitioning scheme changes during copying, but only changes slightly. [#3166](https://github.com/ClickHouse/ClickHouse/pull/3166)
+* The workflow of the `Kafka` engine has been moved to a background thread pool in order to automatically reduce the speed of data reading at high loads. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3215).
+* Support for reading `Tuple` and `Nested` values of structures like `struct` in the `Cap'n'Proto` format. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3216)
+* The list of top-level domains for the `firstSignificantSubdomain` function now includes the domain `biz`. [decaseal](https://github.com/ClickHouse/ClickHouse/pull/3219)
+* In the configuration of external dictionaries, `null_value` is interpreted as the value of the default data type. [#3330](https://github.com/ClickHouse/ClickHouse/pull/3330)
+* Support for the `intDiv` and `intDivOrZero` functions for `Decimal`. [b48402e8](https://github.com/ClickHouse/ClickHouse/commit/b48402e8712e2b9b151e0eef8193811d433a1264)
+* Support for the `Date`, `DateTime`, `UUID`, and `Decimal` types as a key for the `sumMap` aggregate function (see the sketch after this list). [#3281](https://github.com/ClickHouse/ClickHouse/pull/3281)
+* Support for the `Decimal` data type in external dictionaries. [#3324](https://github.com/ClickHouse/ClickHouse/pull/3324)
+* Support for the `Decimal` data type in `SummingMergeTree` tables. [#3348](https://github.com/ClickHouse/ClickHouse/pull/3348)
+* Added specializations for `UUID` in `if`. [#3366](https://github.com/ClickHouse/ClickHouse/pull/3366)
+* Reduced the number of `open` and `close` system calls when reading from a `MergeTree` table. [#3283](https://github.com/ClickHouse/ClickHouse/pull/3283)
+* A `TRUNCATE TABLE` query can be executed on any replica (the query is passed to the leader replica). [Kirill Shvakov](https://github.com/ClickHouse/ClickHouse/pull/3375)
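A sketch of `sumMap` keyed by `Date` values (hypothetical data, not from the changelog): the key arrays from both rows are merged, and values for equal keys are summed:

```sql
SELECT sumMap(dates, visits)
FROM
(
    SELECT [toDate('2018-10-01'), toDate('2018-10-02')] AS dates, [10, 20] AS visits
    UNION ALL
    SELECT [toDate('2018-10-02'), toDate('2018-10-03')], [5, 7]
);
-- Expected result: ([2018-10-01, 2018-10-02, 2018-10-03], [10, 25, 7])
```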
### Bug fixes:
-* Fixed an issue with `Dictionary` tables for `range_hashed` dictionaries. This error occurred in version 18.12.17. [#1702](https://github.com/yandex/ClickHouse/pull/1702)
-* Fixed an error when loading `range_hashed` dictionaries (the message `Unsupported type Nullable (...)`). This error occurred in version 18.12.17. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
-* Fixed errors in the `pointInPolygon` function due to the accumulation of inaccurate calculations for polygons with a large number of vertices located close to each other. [#3331](https://github.com/yandex/ClickHouse/pull/3331) [#3341](https://github.com/yandex/ClickHouse/pull/3341)
-* If after merging data parts, the checksum for the resulting part differs from the result of the same merge in another replica, the result of the merge is deleted and the data part is downloaded from the other replica (this is the correct behavior). But after downloading the data part, it couldn't be added to the working set because of an error that the part already exists (because the data part was deleted with some delay after the merge). This led to cyclical attempts to download the same data. [#3194](https://github.com/yandex/ClickHouse/pull/3194)
-* Fixed incorrect calculation of total memory consumption by queries (because of incorrect calculation, the `max_memory_usage_for_all_queries` setting worked incorrectly and the `MemoryTracking` metric had an incorrect value). This error occurred in version 18.12.13. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3344)
-* Fixed the functionality of `CREATE TABLE ... ON CLUSTER ... AS SELECT ...` This error occurred in version 18.12.13. [#3247](https://github.com/yandex/ClickHouse/pull/3247)
-* Fixed unnecessary preparation of data structures for `JOIN`s on the server that initiates the query if the `JOIN` is only performed on remote servers. [#3340](https://github.com/yandex/ClickHouse/pull/3340)
-* Fixed bugs in the `Kafka` engine: deadlocks after exceptions when starting to read data, and locks upon completion [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215).
-* For `Kafka` tables, the optional `schema` parameter was not passed (the schema of the `Cap'n'Proto` format). [Vojtech Splichal](https://github.com/yandex/ClickHouse/pull/3150)
-* If the ensemble of ZooKeeper servers has servers that accept the connection but then immediately close it instead of responding to the handshake, ClickHouse chooses to connect another server. Previously, this produced the error `Cannot read all data. Bytes read: 0. Bytes expected: 4.` and the server couldn't start. [8218cf3a](https://github.com/yandex/ClickHouse/commit/8218cf3a5f39a43401953769d6d12a0bb8d29da9)
-* If the ensemble of ZooKeeper servers contains servers for which the DNS query returns an error, these servers are ignored. [17b8e209](https://github.com/yandex/ClickHouse/commit/17b8e209221061325ad7ba0539f03c6e65f87f29)
-* Fixed type conversion between `Date` and `DateTime` when inserting data in the `VALUES` format (if `input_format_values_interpret_expressions = 1`). Previously, the conversion was performed between the numerical value of the number of days in Unix Epoch time and the Unix timestamp, which led to unexpected results. [#3229](https://github.com/yandex/ClickHouse/pull/3229)
-* Corrected type conversion between `Decimal` and integer numbers. [#3211](https://github.com/yandex/ClickHouse/pull/3211)
-* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3231)
-* Fixed a parsing error in CSV format with floating-point numbers if a non-default CSV separator is used, such as `;` [#3155](https://github.com/yandex/ClickHouse/pull/3155)
-* Fixed the `arrayCumSumNonNegative` function (it does not accumulate negative values if the accumulator is less than zero). [Aleksey Studnev](https://github.com/yandex/ClickHouse/pull/3163)
-* Fixed how `Merge` tables work on top of `Distributed` tables when using `PREWHERE`. [#3165](https://github.com/yandex/ClickHouse/pull/3165)
+* Fixed an issue with `Dictionary` tables for `range_hashed` dictionaries. This error occurred in version 18.12.17. [#1702](https://github.com/ClickHouse/ClickHouse/pull/1702)
+* Fixed an error when loading `range_hashed` dictionaries (the message `Unsupported type Nullable (...)`). This error occurred in version 18.12.17. [#3362](https://github.com/ClickHouse/ClickHouse/pull/3362)
+* Fixed errors in the `pointInPolygon` function due to the accumulation of inaccurate calculations for polygons with a large number of vertices located close to each other. [#3331](https://github.com/ClickHouse/ClickHouse/pull/3331) [#3341](https://github.com/ClickHouse/ClickHouse/pull/3341)
+* If after merging data parts, the checksum for the resulting part differs from the result of the same merge in another replica, the result of the merge is deleted and the data part is downloaded from the other replica (this is the correct behavior). But after downloading the data part, it couldn't be added to the working set because of an error that the part already exists (because the data part was deleted with some delay after the merge). This led to cyclical attempts to download the same data. [#3194](https://github.com/ClickHouse/ClickHouse/pull/3194)
+* Fixed incorrect calculation of total memory consumption by queries (as a result, the `max_memory_usage_for_all_queries` setting worked incorrectly and the `MemoryTracking` metric had an incorrect value). This error occurred in version 18.12.13. [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3344)
+* Fixed the functionality of `CREATE TABLE ... ON CLUSTER ... AS SELECT ...`. This error occurred in version 18.12.13. [#3247](https://github.com/ClickHouse/ClickHouse/pull/3247)
+* Fixed unnecessary preparation of data structures for `JOIN`s on the server that initiates the query if the `JOIN` is only performed on remote servers. [#3340](https://github.com/ClickHouse/ClickHouse/pull/3340)
+* Fixed bugs in the `Kafka` engine: deadlocks after exceptions when starting to read data, and locks upon completion [Marek Vavruša](https://github.com/ClickHouse/ClickHouse/pull/3215).
+* For `Kafka` tables, the optional `schema` parameter (the schema for the `Cap'n'Proto` format) was not passed. [Vojtech Splichal](https://github.com/ClickHouse/ClickHouse/pull/3150)
+* If the ensemble of ZooKeeper servers has servers that accept the connection but then immediately close it instead of responding to the handshake, ClickHouse tries to connect to another server. Previously, this produced the error `Cannot read all data. Bytes read: 0. Bytes expected: 4.` and the server couldn't start. [8218cf3a](https://github.com/ClickHouse/ClickHouse/commit/8218cf3a5f39a43401953769d6d12a0bb8d29da9)
+* If the ensemble of ZooKeeper servers contains servers for which the DNS query returns an error, these servers are ignored. [17b8e209](https://github.com/ClickHouse/ClickHouse/commit/17b8e209221061325ad7ba0539f03c6e65f87f29)
+* Fixed type conversion between `Date` and `DateTime` when inserting data in the `VALUES` format (if `input_format_values_interpret_expressions = 1`). Previously, the conversion was performed between the numerical value of the number of days in Unix Epoch time and the Unix timestamp, which led to unexpected results. [#3229](https://github.com/ClickHouse/ClickHouse/pull/3229)
+* Corrected type conversion between `Decimal` and integer numbers. [#3211](https://github.com/ClickHouse/ClickHouse/pull/3211)
+* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3231)
+* Fixed a parsing error in CSV format with floating-point numbers if a non-default CSV separator is used, such as `;`. [#3155](https://github.com/ClickHouse/ClickHouse/pull/3155)
+* Fixed the `arrayCumSumNonNegative` function (it does not accumulate negative values if the accumulator is less than zero). [Aleksey Studnev](https://github.com/ClickHouse/ClickHouse/pull/3163)
+* Fixed how `Merge` tables work on top of `Distributed` tables when using `PREWHERE`. [#3165](https://github.com/ClickHouse/ClickHouse/pull/3165)
* Bug fixes in the `ALTER UPDATE` query.
-* Fixed bugs in the `odbc` table function that appeared in version 18.12. [#3197](https://github.com/yandex/ClickHouse/pull/3197)
-* Fixed the operation of aggregate functions with `StateArray` combinators. [#3188](https://github.com/yandex/ClickHouse/pull/3188)
-* Fixed a crash when dividing a `Decimal` value by zero. [69dd6609](https://github.com/yandex/ClickHouse/commit/69dd6609193beb4e7acd3e6ad216eca0ccfb8179)
-* Fixed output of types for operations using `Decimal` and integer arguments. [#3224](https://github.com/yandex/ClickHouse/pull/3224)
-* Fixed the segfault during `GROUP BY` on `Decimal128`. [3359ba06](https://github.com/yandex/ClickHouse/commit/3359ba06c39fcd05bfdb87d6c64154819621e13a)
-* The `log_query_threads` setting (logging information about each thread of query execution) now takes effect only if the `log_queries` option (logging information about queries) is set to 1. Since the `log_query_threads` option is enabled by default, information about threads was previously logged even if query logging was disabled. [#3241](https://github.com/yandex/ClickHouse/pull/3241)
-* Fixed an error in the distributed operation of the quantiles aggregate function (the error message `Not found column quantile...`). [292a8855](https://github.com/yandex/ClickHouse/commit/292a885533b8e3b41ce8993867069d14cbd5a664)
-* Fixed the compatibility problem when working on a cluster of version 18.12.17 servers and older servers at the same time. For distributed queries with GROUP BY keys of both fixed and non-fixed length, if there was a large amount of data to aggregate, the returned data was not always fully aggregated (two different rows contained the same aggregation keys). [#3254](https://github.com/yandex/ClickHouse/pull/3254)
-* Fixed handling of substitutions in `clickhouse-performance-test`, if the query contains only part of the substitutions declared in the test. [#3263](https://github.com/yandex/ClickHouse/pull/3263)
-* Fixed an error when using `FINAL` with `PREWHERE`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
-* Fixed an error when using `PREWHERE` over columns that were added during `ALTER`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
-* Added a check for the absence of `arrayJoin` for `DEFAULT` and `MATERIALIZED` expressions. Previously, `arrayJoin` led to an error when inserting data. [#3337](https://github.com/yandex/ClickHouse/pull/3337)
-* Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... doesn't match` or `Unknown compression method` when executing queries. [#3357](https://github.com/yandex/ClickHouse/pull/3357)
-* Fixed segfault that could occur in rare cases after optimization that replaced AND chains from equality evaluations with the corresponding IN expression. [liuyimin-bytedance](https://github.com/yandex/ClickHouse/pull/3339)
-* Minor corrections to `clickhouse-benchmark`: previously, client information was not sent to the server; now the number of queries executed is calculated more accurately when shutting down and for limiting the number of iterations. [#3351](https://github.com/yandex/ClickHouse/pull/3351) [#3352](https://github.com/yandex/ClickHouse/pull/3352)
+* Fixed bugs in the `odbc` table function that appeared in version 18.12. [#3197](https://github.com/ClickHouse/ClickHouse/pull/3197)
+* Fixed the operation of aggregate functions with `StateArray` combinators. [#3188](https://github.com/ClickHouse/ClickHouse/pull/3188)
+* Fixed a crash when dividing a `Decimal` value by zero. [69dd6609](https://github.com/ClickHouse/ClickHouse/commit/69dd6609193beb4e7acd3e6ad216eca0ccfb8179)
+* Fixed output of types for operations using `Decimal` and integer arguments. [#3224](https://github.com/ClickHouse/ClickHouse/pull/3224)
+* Fixed the segfault during `GROUP BY` on `Decimal128`. [3359ba06](https://github.com/ClickHouse/ClickHouse/commit/3359ba06c39fcd05bfdb87d6c64154819621e13a)
+* The `log_query_threads` setting (logging information about each thread of query execution) now takes effect only if the `log_queries` option (logging information about queries) is set to 1. Since the `log_query_threads` option is enabled by default, information about threads was previously logged even if query logging was disabled. [#3241](https://github.com/ClickHouse/ClickHouse/pull/3241)
+* Fixed an error in the distributed operation of the quantiles aggregate function (the error message `Not found column quantile...`). [292a8855](https://github.com/ClickHouse/ClickHouse/commit/292a885533b8e3b41ce8993867069d14cbd5a664)
+* Fixed the compatibility problem when working on a cluster of version 18.12.17 servers and older servers at the same time. For distributed queries with GROUP BY keys of both fixed and non-fixed length, if there was a large amount of data to aggregate, the returned data was not always fully aggregated (two different rows contained the same aggregation keys). [#3254](https://github.com/ClickHouse/ClickHouse/pull/3254)
+* Fixed handling of substitutions in `clickhouse-performance-test`, if the query contains only part of the substitutions declared in the test. [#3263](https://github.com/ClickHouse/ClickHouse/pull/3263)
+* Fixed an error when using `FINAL` with `PREWHERE`. [#3298](https://github.com/ClickHouse/ClickHouse/pull/3298)
+* Fixed an error when using `PREWHERE` over columns that were added during `ALTER`. [#3298](https://github.com/ClickHouse/ClickHouse/pull/3298)
+* Added a check for the absence of `arrayJoin` for `DEFAULT` and `MATERIALIZED` expressions. Previously, `arrayJoin` led to an error when inserting data. [#3337](https://github.com/ClickHouse/ClickHouse/pull/3337)
+* Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... doesn't match` or `Unknown compression method` when executing queries. [#3357](https://github.com/ClickHouse/ClickHouse/pull/3357)
+* Fixed a segfault that could occur in rare cases after the optimization that replaces chains of AND-ed equality comparisons with the corresponding IN expression. [liuyimin-bytedance](https://github.com/ClickHouse/ClickHouse/pull/3339)
+* Minor corrections to `clickhouse-benchmark`: previously, client information was not sent to the server; now the number of executed queries is counted more accurately when shutting down and when limiting the number of iterations. [#3351](https://github.com/ClickHouse/ClickHouse/pull/3351) [#3352](https://github.com/ClickHouse/ClickHouse/pull/3352)
### Backward incompatible changes:
-* Removed the `allow_experimental_decimal_type` option. The `Decimal` data type is available for default use. [#3329](https://github.com/yandex/ClickHouse/pull/3329)
+* Removed the `allow_experimental_decimal_type` option. The `Decimal` data type is available for default use. [#3329](https://github.com/ClickHouse/ClickHouse/pull/3329)
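+Since `Decimal` now works without an experimental flag, a minimal runnable sketch (hypothetical table and values):
+```sql
+-- Hypothetical table: Decimal64(4) holds 18 significant digits, 4 after the decimal point
+CREATE TABLE prices (id UInt64, price Decimal64(4)) ENGINE = MergeTree ORDER BY id;
+INSERT INTO prices VALUES (1, toDecimal64('99.9950', 4));
+SELECT id, price * 2 AS doubled FROM prices;
+```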
## ClickHouse release 18.12.17, 2018-09-16
### New features:
-* `invalidate_query` (the ability to specify a query to check whether an external dictionary needs to be updated) is implemented for the `clickhouse` source. [#3126](https://github.com/yandex/ClickHouse/pull/3126)
-* Added the ability to use `UInt*`, `Int*`, and `DateTime` data types (along with the `Date` type) as a `range_hashed` external dictionary key that defines the boundaries of ranges. Now `NULL` can be used to designate an open range. [Vasily Nemkov](https://github.com/yandex/ClickHouse/pull/3123)
-* The `Decimal` type now supports `var*` and `stddev*` aggregate functions. [#3129](https://github.com/yandex/ClickHouse/pull/3129)
-* The `Decimal` type now supports mathematical functions (`exp`, `sin` and so on.) [#3129](https://github.com/yandex/ClickHouse/pull/3129)
-* The `system.part_log` table now has the `partition_id` column. [#3089](https://github.com/yandex/ClickHouse/pull/3089)
+* `invalidate_query` (the ability to specify a query to check whether an external dictionary needs to be updated) is implemented for the `clickhouse` source. [#3126](https://github.com/ClickHouse/ClickHouse/pull/3126)
+* Added the ability to use `UInt*`, `Int*`, and `DateTime` data types (along with the `Date` type) as a `range_hashed` external dictionary key that defines the boundaries of ranges. Now `NULL` can be used to designate an open range. [Vasily Nemkov](https://github.com/ClickHouse/ClickHouse/pull/3123)
+* The `Decimal` type now supports `var*` and `stddev*` aggregate functions. [#3129](https://github.com/ClickHouse/ClickHouse/pull/3129)
+* The `Decimal` type now supports mathematical functions (`exp`, `sin`, and so on). [#3129](https://github.com/ClickHouse/ClickHouse/pull/3129)
+* The `system.part_log` table now has the `partition_id` column. [#3089](https://github.com/ClickHouse/ClickHouse/pull/3089)
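+The new `Decimal` aggregate and math support above, in an illustrative sketch:
+```sql
+SELECT
+    varPop(d)    AS variance,   -- var*/stddev* now accept Decimal arguments
+    stddevPop(d) AS deviation,
+    exp(any(d))  AS exponent    -- math functions on Decimal return Float64
+FROM (SELECT toDecimal64(number + 0.5, 2) AS d FROM system.numbers LIMIT 5);
+```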
### Bug fixes:
-* `Merge` now works correctly on `Distributed` tables. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3159)
-* Fixed incompatibility (unnecessary dependency on the `glibc` version) that made it impossible to run ClickHouse on `Ubuntu Precise` and older versions. The incompatibility arose in version 18.12.13. [#3130](https://github.com/yandex/ClickHouse/pull/3130)
-* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
-* Fixed a minor issue with backwards compatibility that appeared when working with a cluster of replicas on versions earlier than 18.12.13 and simultaneously creating a new replica of a table on a server with a newer version (shown in the message `Can not clone replica, because the ... updated to new ClickHouse version`, which is logical, but shouldn't happen). [#3122](https://github.com/yandex/ClickHouse/pull/3122)
+* `Merge` now works correctly on `Distributed` tables. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3159)
+* Fixed incompatibility (unnecessary dependency on the `glibc` version) that made it impossible to run ClickHouse on `Ubuntu Precise` and older versions. The incompatibility arose in version 18.12.13. [#3130](https://github.com/ClickHouse/ClickHouse/pull/3130)
+* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3107)
+* Fixed a minor issue with backwards compatibility that appeared when working with a cluster of replicas on versions earlier than 18.12.13 and simultaneously creating a new replica of a table on a server with a newer version (shown in the message `Can not clone replica, because the ... updated to new ClickHouse version`, which is logical, but shouldn't happen). [#3122](https://github.com/ClickHouse/ClickHouse/pull/3122)
### Backward incompatible changes:
-* The `enable_optimize_predicate_expression` option is enabled by default (which is rather optimistic). If query analysis errors occur that are related to searching for the column names, set `enable_optimize_predicate_expression` to 0. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
+* The `enable_optimize_predicate_expression` option is enabled by default (which is rather optimistic). If query analysis errors occur that are related to searching for the column names, set `enable_optimize_predicate_expression` to 0. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3107)
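+If the now-default predicate pushdown misresolves column names, it can be disabled per session, as in this sketch:
+```sql
+SET enable_optimize_predicate_expression = 0;
+-- The outer condition is no longer pushed down into the subquery
+SELECT n FROM (SELECT number AS n FROM system.numbers LIMIT 100) WHERE n < 10;
+```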
## ClickHouse release 18.12.14, 2018-09-13
### New features:
-* Added support for `ALTER UPDATE` queries. [#3035](https://github.com/yandex/ClickHouse/pull/3035)
-* Added the `allow_ddl` option, which restricts the user's access to DDL queries. [#3104](https://github.com/yandex/ClickHouse/pull/3104)
-* Added the `min_merge_bytes_to_use_direct_io` option for `MergeTree` engines, which allows you to set a threshold for the total size of the merge (when above the threshold, data part files will be handled using O_DIRECT). [#3117](https://github.com/yandex/ClickHouse/pull/3117)
-* The `system.merges` system table now contains the `partition_id` column. [#3099](https://github.com/yandex/ClickHouse/pull/3099)
+* Added support for `ALTER UPDATE` queries. [#3035](https://github.com/ClickHouse/ClickHouse/pull/3035)
+* Added the `allow_ddl` option, which restricts the user's access to DDL queries. [#3104](https://github.com/ClickHouse/ClickHouse/pull/3104)
+* Added the `min_merge_bytes_to_use_direct_io` option for `MergeTree` engines, which allows you to set a threshold for the total size of the merge (when above the threshold, data part files will be handled using O_DIRECT). [#3117](https://github.com/ClickHouse/ClickHouse/pull/3117)
+* The `system.merges` system table now contains the `partition_id` column. [#3099](https://github.com/ClickHouse/ClickHouse/pull/3099)
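+A sketch of the `ALTER UPDATE` and direct-I/O features above on a hypothetical table, assuming the table-level `SETTINGS` clause:
+```sql
+-- Merges whose total size exceeds ~10 GiB will read and write with O_DIRECT
+CREATE TABLE hits (user_id UInt64, visits UInt32)
+ENGINE = MergeTree ORDER BY user_id
+SETTINGS min_merge_bytes_to_use_direct_io = 10737418240;
+-- In-place update, executed asynchronously as a mutation
+ALTER TABLE hits UPDATE visits = visits + 1 WHERE user_id = 42;
+```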
### Improvements
-* If a data part remains unchanged during mutation, it isn't downloaded by replicas. [#3103](https://github.com/yandex/ClickHouse/pull/3103)
-* Autocomplete is available for names of settings when working with `clickhouse-client`. [#3106](https://github.com/yandex/ClickHouse/pull/3106)
+* If a data part remains unchanged during mutation, it isn't downloaded by replicas. [#3103](https://github.com/ClickHouse/ClickHouse/pull/3103)
+* Autocomplete is available for names of settings when working with `clickhouse-client`. [#3106](https://github.com/ClickHouse/ClickHouse/pull/3106)
### Bug fixes:
-* Added a check for the sizes of arrays that are elements of `Nested` type fields when inserting. [#3118](https://github.com/yandex/ClickHouse/pull/3118)
+* Added a check for the sizes of arrays that are elements of `Nested` type fields when inserting. [#3118](https://github.com/ClickHouse/ClickHouse/pull/3118)
* Fixed an error updating external dictionaries with the `ODBC` source and `hashed` storage. This error occurred in version 18.12.13.
-* Fixed a crash when creating a temporary table from a query with an `IN` condition. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3098)
-* Fixed an error in aggregate functions for arrays that can have `NULL` elements. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3097)
+* Fixed a crash when creating a temporary table from a query with an `IN` condition. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3098)
+* Fixed an error in aggregate functions for arrays that can have `NULL` elements. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/3097)
## ClickHouse release 18.12.13, 2018-09-10
### New features:
-* Added the `DECIMAL(digits, scale)` data type (`Decimal32(scale)`, `Decimal64(scale)`, `Decimal128(scale)`). To enable it, use the setting `allow_experimental_decimal_type`. [#2846](https://github.com/yandex/ClickHouse/pull/2846) [#2970](https://github.com/yandex/ClickHouse/pull/2970) [#3008](https://github.com/yandex/ClickHouse/pull/3008) [#3047](https://github.com/yandex/ClickHouse/pull/3047)
-* New `WITH ROLLUP` modifier for `GROUP BY` (alternative syntax: `GROUP BY ROLLUP(...)`). [#2948](https://github.com/yandex/ClickHouse/pull/2948)
-* In queries with JOIN, the star character expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting `asterisk_left_columns_only` to 1 on the user configuration level. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2787)
-* Added support for JOIN with table functions. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2907)
-* Autocomplete by pressing Tab in clickhouse-client. [Sergey Shcherbin](https://github.com/yandex/ClickHouse/pull/2447)
-* Ctrl+C in clickhouse-client clears a query that was entered. [#2877](https://github.com/yandex/ClickHouse/pull/2877)
-* Added the `join_default_strictness` setting (values: `"`, `'any'`, `'all'`). This allows you to not specify `ANY` or `ALL` for `JOIN`. [#2982](https://github.com/yandex/ClickHouse/pull/2982)
-* Each line of the server log related to query processing shows the query ID. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* Now you can get query execution logs in clickhouse-client (use the `send_logs_level` setting). With distributed query processing, logs are cascaded from all the servers. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* The `system.query_log` and `system.processes` (`SHOW PROCESSLIST`) tables now have information about all changed settings when you run a query (the nested structure of the `Settings` data). Added the `log_query_settings` setting. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* The `system.query_log` and `system.processes` tables now show information about the number of threads that are participating in query execution (see the `thread_numbers` column). [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* Added `ProfileEvents` counters that measure the time spent on reading and writing over the network and reading and writing to disk, the number of network errors, and the time spent waiting when network bandwidth is limited. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* Added `ProfileEvents`counters that contain the system metrics from rusage (you can use them to get information about CPU usage in userspace and the kernel, page faults, and context switches), as well as taskstats metrics (use these to obtain information about I/O wait time, CPU wait time, and the amount of data read and recorded, both with and without page cache). [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* The `ProfileEvents` counters are applied globally and for each query, as well as for each query execution thread, which allows you to profile resource consumption by query in detail. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* Added the `system.query_thread_log` table, which contains information about each query execution thread. Added the `log_query_threads` setting. [#2482](https://github.com/yandex/ClickHouse/pull/2482)
-* The `system.metrics` and `system.events` tables now have built-in documentation. [#3016](https://github.com/yandex/ClickHouse/pull/3016)
-* Added the `arrayEnumerateDense` function. [Amos Bird](https://github.com/yandex/ClickHouse/pull/2975)
-* Added the `arrayCumSumNonNegative` and `arrayDifference` functions. [Aleksey Studnev](https://github.com/yandex/ClickHouse/pull/2942)
-* Added the `retention` aggregate function. [Sundy Li](https://github.com/yandex/ClickHouse/pull/2887)
-* Now you can add (merge) states of aggregate functions by using the plus operator, and multiply the states of aggregate functions by a nonnegative constant. [#3062](https://github.com/yandex/ClickHouse/pull/3062) [#3034](https://github.com/yandex/ClickHouse/pull/3034)
-* Tables in the MergeTree family now have the virtual column `_partition_id`. [#3089](https://github.com/yandex/ClickHouse/pull/3089)
+* Added the `DECIMAL(digits, scale)` data type (`Decimal32(scale)`, `Decimal64(scale)`, `Decimal128(scale)`). To enable it, use the setting `allow_experimental_decimal_type`. [#2846](https://github.com/ClickHouse/ClickHouse/pull/2846) [#2970](https://github.com/ClickHouse/ClickHouse/pull/2970) [#3008](https://github.com/ClickHouse/ClickHouse/pull/3008) [#3047](https://github.com/ClickHouse/ClickHouse/pull/3047)
+* New `WITH ROLLUP` modifier for `GROUP BY` (alternative syntax: `GROUP BY ROLLUP(...)`). [#2948](https://github.com/ClickHouse/ClickHouse/pull/2948)
+* In queries with JOIN, the star character expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting `asterisk_left_columns_only` to 1 on the user configuration level. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2787)
+* Added support for JOIN with table functions. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2907)
+* Autocomplete by pressing Tab in clickhouse-client. [Sergey Shcherbin](https://github.com/ClickHouse/ClickHouse/pull/2447)
+* Ctrl+C in clickhouse-client clears a query that was entered. [#2877](https://github.com/ClickHouse/ClickHouse/pull/2877)
+* Added the `join_default_strictness` setting (values: `''`, `'any'`, `'all'`). This allows you to omit `ANY` or `ALL` for `JOIN`. [#2982](https://github.com/ClickHouse/ClickHouse/pull/2982)
+* Each line of the server log related to query processing shows the query ID. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* Now you can get query execution logs in clickhouse-client (use the `send_logs_level` setting). With distributed query processing, logs are cascaded from all the servers. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* The `system.query_log` and `system.processes` (`SHOW PROCESSLIST`) tables now have information about all changed settings when you run a query (the nested structure of the `Settings` data). Added the `log_query_settings` setting. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* The `system.query_log` and `system.processes` tables now show information about the number of threads that are participating in query execution (see the `thread_numbers` column). [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* Added `ProfileEvents` counters that measure the time spent on reading and writing over the network and reading and writing to disk, the number of network errors, and the time spent waiting when network bandwidth is limited. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* Added `ProfileEvents` counters that contain the system metrics from rusage (you can use them to get information about CPU usage in userspace and the kernel, page faults, and context switches), as well as taskstats metrics (use these to obtain information about I/O wait time, CPU wait time, and the amount of data read and written, both with and without page cache). [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* The `ProfileEvents` counters are applied globally and for each query, as well as for each query execution thread, which allows you to profile resource consumption by query in detail. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* Added the `system.query_thread_log` table, which contains information about each query execution thread. Added the `log_query_threads` setting. [#2482](https://github.com/ClickHouse/ClickHouse/pull/2482)
+* The `system.metrics` and `system.events` tables now have built-in documentation. [#3016](https://github.com/ClickHouse/ClickHouse/pull/3016)
+* Added the `arrayEnumerateDense` function. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2975)
+* Added the `arrayCumSumNonNegative` and `arrayDifference` functions. [Aleksey Studnev](https://github.com/ClickHouse/ClickHouse/pull/2942)
+* Added the `retention` aggregate function. [Sundy Li](https://github.com/ClickHouse/ClickHouse/pull/2887)
+* Now you can add (merge) states of aggregate functions by using the plus operator, and multiply the states of aggregate functions by a nonnegative constant. [#3062](https://github.com/ClickHouse/ClickHouse/pull/3062) [#3034](https://github.com/ClickHouse/ClickHouse/pull/3034)
+* Tables in the MergeTree family now have the virtual column `_partition_id`. [#3089](https://github.com/ClickHouse/ClickHouse/pull/3089)
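+Two of the additions above in a runnable sketch (`WITH ROLLUP` and the new array functions):
+```sql
+-- WITH ROLLUP adds subtotal rows for each prefix of the grouping keys
+SELECT k, m, sum(v) AS total
+FROM (SELECT number % 2 AS k, number % 3 AS m, number AS v FROM system.numbers LIMIT 12)
+GROUP BY k, m WITH ROLLUP;
+SELECT
+    arrayDifference([1, 4, 9, 16])     AS diffs,  -- [0, 3, 5, 7]
+    arrayCumSumNonNegative([1, -2, 3]) AS sums;   -- [1, 0, 3]
+```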
### Experimental features:
-* Added the `LowCardinality(T)` data type. This data type automatically creates a local dictionary of values and allows data processing without unpacking the dictionary. [#2830](https://github.com/yandex/ClickHouse/pull/2830)
-* Added a cache of JIT-compiled functions and a counter for the number of uses before compiling. To JIT compile expressions, enable the `compile_expressions` setting. [#2990](https://github.com/yandex/ClickHouse/pull/2990) [#3077](https://github.com/yandex/ClickHouse/pull/3077)
+* Added the `LowCardinality(T)` data type. This data type automatically creates a local dictionary of values and allows data processing without unpacking the dictionary. [#2830](https://github.com/ClickHouse/ClickHouse/pull/2830)
+* Added a cache of JIT-compiled functions and a counter for the number of uses before compiling. To JIT compile expressions, enable the `compile_expressions` setting. [#2990](https://github.com/ClickHouse/ClickHouse/pull/2990) [#3077](https://github.com/ClickHouse/ClickHouse/pull/3077)
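+A sketch of the experimental type; the session flag name (`allow_experimental_low_cardinality_type`) is an assumption for builds of this era:
+```sql
+SET allow_experimental_low_cardinality_type = 1;  -- assumed flag name
+CREATE TABLE tags
+(
+    id  UInt64,
+    tag LowCardinality(String)  -- values are dictionary-encoded within each part
+)
+ENGINE = MergeTree ORDER BY id;
+```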
### Improvements:
* Fixed the problem with unlimited accumulation of the replication log when there are abandoned replicas. Added an effective recovery mode for replicas with a long lag.
* Improved performance of `GROUP BY` with multiple aggregation fields when one of them is a string and the others are fixed-length.
* Improved performance when using `PREWHERE` and with implicit transfer of expressions in `PREWHERE`.
-* Improved parsing performance for text formats (`CSV`, `TSV`). [Amos Bird](https://github.com/yandex/ClickHouse/pull/2977) [#2980](https://github.com/yandex/ClickHouse/pull/2980)
-* Improved performance of reading strings and arrays in binary formats. [Amos Bird](https://github.com/yandex/ClickHouse/pull/2955)
-* Increased performance and reduced memory consumption for queries to `system.tables` and `system.columns` when there is a very large number of tables on a single server. [#2953](https://github.com/yandex/ClickHouse/pull/2953)
-* Fixed a performance problem in the case of a large stream of queries that result in an error (the ` _dl_addr` function is visible in `perf top`, but the server isn't using much CPU). [#2938](https://github.com/yandex/ClickHouse/pull/2938)
-* Conditions are cast into the View (when `enable_optimize_predicate_expression` is enabled). [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2907)
-* Improvements to the functionality for the `UUID` data type. [#3074](https://github.com/yandex/ClickHouse/pull/3074) [#2985](https://github.com/yandex/ClickHouse/pull/2985)
-* The `UUID` data type is supported in The-Alchemist dictionaries. [#2822](https://github.com/yandex/ClickHouse/pull/2822)
-* The `visitParamExtractRaw` function works correctly with nested structures. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2974)
-* When the `input_format_skip_unknown_fields` setting is enabled, object fields in `JSONEachRow` format are skipped correctly. [BlahGeek](https://github.com/yandex/ClickHouse/pull/2958)
-* For a `CASE` expression with conditions, you can now omit `ELSE`, which is equivalent to `ELSE NULL`. [#2920](https://github.com/yandex/ClickHouse/pull/2920)
-* The operation timeout can now be configured when working with ZooKeeper. [urykhy](https://github.com/yandex/ClickHouse/pull/2971)
-* You can specify an offset for `LIMIT n, m` as `LIMIT n OFFSET m`. [#2840](https://github.com/yandex/ClickHouse/pull/2840)
-* You can use the `SELECT TOP n` syntax as an alternative for `LIMIT`. [#2840](https://github.com/yandex/ClickHouse/pull/2840)
+* Improved parsing performance for text formats (`CSV`, `TSV`). [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2977) [#2980](https://github.com/ClickHouse/ClickHouse/pull/2980)
+* Improved performance of reading strings and arrays in binary formats. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2955)
+* Increased performance and reduced memory consumption for queries to `system.tables` and `system.columns` when there is a very large number of tables on a single server. [#2953](https://github.com/ClickHouse/ClickHouse/pull/2953)
+* Fixed a performance problem in the case of a large stream of queries that result in an error (the `_dl_addr` function is visible in `perf top`, but the server isn't using much CPU). [#2938](https://github.com/ClickHouse/ClickHouse/pull/2938)
+* Conditions are cast into the View (when `enable_optimize_predicate_expression` is enabled). [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2907)
+* Improvements to the functionality for the `UUID` data type. [#3074](https://github.com/ClickHouse/ClickHouse/pull/3074) [#2985](https://github.com/ClickHouse/ClickHouse/pull/2985)
+* The `UUID` data type is supported in external dictionaries. [The-Alchemist](https://github.com/ClickHouse/ClickHouse/pull/2822)
+* The `visitParamExtractRaw` function works correctly with nested structures. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2974)
+* When the `input_format_skip_unknown_fields` setting is enabled, object fields in `JSONEachRow` format are skipped correctly. [BlahGeek](https://github.com/ClickHouse/ClickHouse/pull/2958)
+* For a `CASE` expression with conditions, you can now omit `ELSE`, which is equivalent to `ELSE NULL`. [#2920](https://github.com/ClickHouse/ClickHouse/pull/2920)
+* The operation timeout can now be configured when working with ZooKeeper. [urykhy](https://github.com/ClickHouse/ClickHouse/pull/2971)
+* You can specify an offset for `LIMIT n, m` as `LIMIT n OFFSET m`. [#2840](https://github.com/ClickHouse/ClickHouse/pull/2840)
+* You can use the `SELECT TOP n` syntax as an alternative for `LIMIT`. [#2840](https://github.com/ClickHouse/ClickHouse/pull/2840)
* Increased the size of the queue to write to system tables, so the `SystemLog parameter queue is full` error doesn't happen as often.
-* The `windowFunnel` aggregate function now supports events that meet multiple conditions. [Amos Bird](https://github.com/yandex/ClickHouse/pull/2801)
-* Duplicate columns can be used in a `USING` clause for `JOIN`. [#3006](https://github.com/yandex/ClickHouse/pull/3006)
-* `Pretty` formats now have a limit on column alignment by width. Use the `output_format_pretty_max_column_pad_width` setting. If a value is wider, it will still be displayed in its entirety, but the other cells in the table will not be too wide. [#3003](https://github.com/yandex/ClickHouse/pull/3003)
-* The `odbc` table function now allows you to specify the database/schema name. [Amos Bird](https://github.com/yandex/ClickHouse/pull/2885)
-* Added the ability to use a username specified in the `clickhouse-client` config file. [Vladimir Kozbin](https://github.com/yandex/ClickHouse/pull/2909)
+* The `windowFunnel` aggregate function now supports events that meet multiple conditions. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2801)
+* Duplicate columns can be used in a `USING` clause for `JOIN`. [#3006](https://github.com/ClickHouse/ClickHouse/pull/3006)
+* `Pretty` formats now have a limit on column alignment by width. Use the `output_format_pretty_max_column_pad_width` setting. If a value is wider, it will still be displayed in its entirety, but the other cells in the table will not be too wide. [#3003](https://github.com/ClickHouse/ClickHouse/pull/3003)
+* The `odbc` table function now allows you to specify the database/schema name. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2885)
+* Added the ability to use a username specified in the `clickhouse-client` config file. [Vladimir Kozbin](https://github.com/ClickHouse/ClickHouse/pull/2909)
* The `ZooKeeperExceptions` counter has been split into three counters: `ZooKeeperUserExceptions`, `ZooKeeperHardwareExceptions`, and `ZooKeeperOtherExceptions`.
* `ALTER DELETE` queries work for materialized views.
* Added randomization when running the cleanup thread periodically for `ReplicatedMergeTree` tables in order to avoid periodic load spikes when there are a very large number of `ReplicatedMergeTree` tables.
-* Support for `ATTACH TABLE ... ON CLUSTER` queries. [#3025](https://github.com/yandex/ClickHouse/pull/3025)
+* Support for `ATTACH TABLE ... ON CLUSTER` queries. [#3025](https://github.com/ClickHouse/ClickHouse/pull/3025)
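+A few of the syntax improvements above, sketched:
+```sql
+-- Equivalent ways to skip 10 rows and take 5
+SELECT number FROM system.numbers LIMIT 10, 5;
+SELECT number FROM system.numbers LIMIT 5 OFFSET 10;
+-- SELECT TOP n as an alternative to LIMIT
+SELECT TOP 3 number FROM system.numbers;
+-- CASE without ELSE is now equivalent to ELSE NULL
+SELECT CASE WHEN number % 2 = 0 THEN 'even' END AS parity FROM system.numbers LIMIT 4;
+```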
### Bug fixes:
-* Fixed an issue with `Dictionary` tables (throws the `Size of offsets doesn't match size of column` or `Unknown compression method` exception). This bug appeared in version 18.10.3. [#2913](https://github.com/yandex/ClickHouse/issues/2913)
-* Fixed a bug when merging `CollapsingMergeTree` tables if one of the data parts is empty (these parts are formed during merge or `ALTER DELETE` if all data was deleted), and the `vertical` algorithm was used for the merge. [#3049](https://github.com/yandex/ClickHouse/pull/3049)
-* Fixed a race condition during `DROP` or `TRUNCATE` for `Memory` tables with a simultaneous `SELECT`, which could lead to server crashes. This bug appeared in version 1.1.54388. [#3038](https://github.com/yandex/ClickHouse/pull/3038)
-* Fixed the possibility of data loss when inserting in `Replicated` tables if the `Session is expired` error is returned (data loss can be detected by the `ReplicatedDataLoss` metric). This error occurred in version 1.1.54378. [#2939](https://github.com/yandex/ClickHouse/pull/2939) [#2949](https://github.com/yandex/ClickHouse/pull/2949) [#2964](https://github.com/yandex/ClickHouse/pull/2964)
-* Fixed a segfault during `JOIN ... ON`. [#3000](https://github.com/yandex/ClickHouse/pull/3000)
-* Fixed the error searching column names when the `WHERE` expression consists entirely of a qualified column name, such as `WHERE table.column`. [#2994](https://github.com/yandex/ClickHouse/pull/2994)
-* Fixed the "Not found column" error that occurred when executing distributed queries if a single column consisting of an IN expression with a subquery is requested from a remote server. [#3087](https://github.com/yandex/ClickHouse/pull/3087)
-* Fixed the `Block structure mismatch in UNION stream: different number of columns` error that occurred for distributed queries if one of the shards is local and the other is not, and optimization of the move to `PREWHERE` is triggered. [#2226](https://github.com/yandex/ClickHouse/pull/2226) [#3037](https://github.com/yandex/ClickHouse/pull/3037) [#3055](https://github.com/yandex/ClickHouse/pull/3055) [#3065](https://github.com/yandex/ClickHouse/pull/3065) [#3073](https://github.com/yandex/ClickHouse/pull/3073) [#3090](https://github.com/yandex/ClickHouse/pull/3090) [#3093](https://github.com/yandex/ClickHouse/pull/3093)
-* Fixed the `pointInPolygon` function for certain cases of non-convex polygons. [#2910](https://github.com/yandex/ClickHouse/pull/2910)
-* Fixed the incorrect result when comparing `nan` with integers. [#3024](https://github.com/yandex/ClickHouse/pull/3024)
-* Fixed an error in the `zlib-ng` library that could lead to segfault in rare cases. [#2854](https://github.com/yandex/ClickHouse/pull/2854)
-* Fixed a memory leak when inserting into a table with `AggregateFunction` columns, if the state of the aggregate function is not simple (allocates memory separately), and if a single insertion request results in multiple small blocks. [#3084](https://github.com/yandex/ClickHouse/pull/3084)
+* Fixed an issue with `Dictionary` tables (throws the `Size of offsets doesn't match size of column` or `Unknown compression method` exception). This bug appeared in version 18.10.3. [#2913](https://github.com/ClickHouse/ClickHouse/issues/2913)
+* Fixed a bug when merging `CollapsingMergeTree` tables if one of the data parts is empty (these parts are formed during merge or `ALTER DELETE` if all data was deleted), and the `vertical` algorithm was used for the merge. [#3049](https://github.com/ClickHouse/ClickHouse/pull/3049)
+* Fixed a race condition during `DROP` or `TRUNCATE` for `Memory` tables with a simultaneous `SELECT`, which could lead to server crashes. This bug appeared in version 1.1.54388. [#3038](https://github.com/ClickHouse/ClickHouse/pull/3038)
+* Fixed the possibility of data loss when inserting in `Replicated` tables if the `Session is expired` error is returned (data loss can be detected by the `ReplicatedDataLoss` metric). This error occurred in version 1.1.54378. [#2939](https://github.com/ClickHouse/ClickHouse/pull/2939) [#2949](https://github.com/ClickHouse/ClickHouse/pull/2949) [#2964](https://github.com/ClickHouse/ClickHouse/pull/2964)
+* Fixed a segfault during `JOIN ... ON`. [#3000](https://github.com/ClickHouse/ClickHouse/pull/3000)
+* Fixed the error searching column names when the `WHERE` expression consists entirely of a qualified column name, such as `WHERE table.column`. [#2994](https://github.com/ClickHouse/ClickHouse/pull/2994)
+* Fixed the "Not found column" error that occurred when executing distributed queries if a single column consisting of an IN expression with a subquery is requested from a remote server. [#3087](https://github.com/ClickHouse/ClickHouse/pull/3087)
+* Fixed the `Block structure mismatch in UNION stream: different number of columns` error that occurred for distributed queries if one of the shards is local and the other is not, and optimization of the move to `PREWHERE` is triggered. [#2226](https://github.com/ClickHouse/ClickHouse/pull/2226) [#3037](https://github.com/ClickHouse/ClickHouse/pull/3037) [#3055](https://github.com/ClickHouse/ClickHouse/pull/3055) [#3065](https://github.com/ClickHouse/ClickHouse/pull/3065) [#3073](https://github.com/ClickHouse/ClickHouse/pull/3073) [#3090](https://github.com/ClickHouse/ClickHouse/pull/3090) [#3093](https://github.com/ClickHouse/ClickHouse/pull/3093)
+* Fixed the `pointInPolygon` function for certain cases of non-convex polygons. [#2910](https://github.com/ClickHouse/ClickHouse/pull/2910)
+* Fixed the incorrect result when comparing `nan` with integers. [#3024](https://github.com/ClickHouse/ClickHouse/pull/3024)
+* Fixed an error in the `zlib-ng` library that could lead to segfault in rare cases. [#2854](https://github.com/ClickHouse/ClickHouse/pull/2854)
+* Fixed a memory leak when inserting into a table with `AggregateFunction` columns, if the state of the aggregate function is not simple (allocates memory separately), and if a single insertion request results in multiple small blocks. [#3084](https://github.com/ClickHouse/ClickHouse/pull/3084)
* Fixed a race condition when creating and deleting the same `Buffer` or `MergeTree` table simultaneously.
-* Fixed the possibility of a segfault when comparing tuples made up of certain non-trivial types, such as tuples. [#2989](https://github.com/yandex/ClickHouse/pull/2989)
-* Fixed the possibility of a segfault when running certain `ON CLUSTER` queries. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2960)
-* Fixed an error in the `arrayDistinct` function for `Nullable` array elements. [#2845](https://github.com/yandex/ClickHouse/pull/2845) [#2937](https://github.com/yandex/ClickHouse/pull/2937)
-* The `enable_optimize_predicate_expression` option now correctly supports cases with `SELECT *`. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2929)
-* Fixed the segfault when re-initializing the ZooKeeper session. [#2917](https://github.com/yandex/ClickHouse/pull/2917)
+* Fixed the possibility of a segfault when comparing tuples whose elements are certain non-trivial types, such as nested tuples. [#2989](https://github.com/ClickHouse/ClickHouse/pull/2989)
+* Fixed the possibility of a segfault when running certain `ON CLUSTER` queries. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2960)
+* Fixed an error in the `arrayDistinct` function for `Nullable` array elements. [#2845](https://github.com/ClickHouse/ClickHouse/pull/2845) [#2937](https://github.com/ClickHouse/ClickHouse/pull/2937)
+* The `enable_optimize_predicate_expression` option now correctly supports cases with `SELECT *`. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2929)
+* Fixed the segfault when re-initializing the ZooKeeper session. [#2917](https://github.com/ClickHouse/ClickHouse/pull/2917)
* Fixed potential blocking when working with ZooKeeper.
* Fixed incorrect code for adding nested data structures in a `SummingMergeTree`.
-* When allocating memory for states of aggregate functions, alignment is correctly taken into account, which makes it possible to use operations that require alignment when implementing states of aggregate functions. [chenxing-xc](https://github.com/yandex/ClickHouse/pull/2808)
+* When allocating memory for states of aggregate functions, alignment is correctly taken into account, which makes it possible to use operations that require alignment when implementing states of aggregate functions. [chenxing-xc](https://github.com/ClickHouse/ClickHouse/pull/2808)
### Security fix:
-* Safe use of ODBC data sources. Interaction with ODBC drivers uses a separate `clickhouse-odbc-bridge` process. Errors in third-party ODBC drivers no longer cause problems with server stability or vulnerabilities. [#2828](https://github.com/yandex/ClickHouse/pull/2828) [#2879](https://github.com/yandex/ClickHouse/pull/2879) [#2886](https://github.com/yandex/ClickHouse/pull/2886) [#2893](https://github.com/yandex/ClickHouse/pull/2893) [#2921](https://github.com/yandex/ClickHouse/pull/2921)
-* Fixed incorrect validation of the file path in the `catBoostPool` table function. [#2894](https://github.com/yandex/ClickHouse/pull/2894)
-* The contents of system tables (`tables`, `databases`, `parts`, `columns`, `parts_columns`, `merges`, `mutations`, `replicas`, and `replication_queue`) are filtered according to the user's configured access to databases (`allow_databases`). [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2856)
+* Safe use of ODBC data sources. Interaction with ODBC drivers uses a separate `clickhouse-odbc-bridge` process. Errors in third-party ODBC drivers no longer cause problems with server stability or vulnerabilities. [#2828](https://github.com/ClickHouse/ClickHouse/pull/2828) [#2879](https://github.com/ClickHouse/ClickHouse/pull/2879) [#2886](https://github.com/ClickHouse/ClickHouse/pull/2886) [#2893](https://github.com/ClickHouse/ClickHouse/pull/2893) [#2921](https://github.com/ClickHouse/ClickHouse/pull/2921)
+* Fixed incorrect validation of the file path in the `catBoostPool` table function. [#2894](https://github.com/ClickHouse/ClickHouse/pull/2894)
+* The contents of system tables (`tables`, `databases`, `parts`, `columns`, `parts_columns`, `merges`, `mutations`, `replicas`, and `replication_queue`) are filtered according to the user's configured access to databases (`allow_databases`). [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2856)
### Backward incompatible changes:
@@ -1850,46 +1850,46 @@ This release contains exactly the same set of patches as 19.3.6.
* Most integration tests can now be run by commit.
* Code style checks can also be run by commit.
-* The `memcpy` implementation is chosen correctly when building on CentOS7/Fedora. [Etienne Champetier](https://github.com/yandex/ClickHouse/pull/2912)
-* When using clang to build, some warnings from `-Weverything` have been added, in addition to the regular `-Wall-Wextra -Werror`. [#2957](https://github.com/yandex/ClickHouse/pull/2957)
+* The `memcpy` implementation is chosen correctly when building on CentOS7/Fedora. [Etienne Champetier](https://github.com/ClickHouse/ClickHouse/pull/2912)
+* When using clang to build, some warnings from `-Weverything` have been added, in addition to the regular `-Wall-Wextra -Werror`. [#2957](https://github.com/ClickHouse/ClickHouse/pull/2957)
* Debug builds use the `jemalloc` debug option.
-* The interface of the library for interacting with ZooKeeper is declared abstract. [#2950](https://github.com/yandex/ClickHouse/pull/2950)
+* The interface of the library for interacting with ZooKeeper is declared abstract. [#2950](https://github.com/ClickHouse/ClickHouse/pull/2950)
## ClickHouse release 18.10.3, 2018-08-13
### New features:
-* HTTPS can be used for replication. [#2760](https://github.com/yandex/ClickHouse/pull/2760)
-* Added the functions `murmurHash2_64`, `murmurHash3_32`, `murmurHash3_64`, and `murmurHash3_128` in addition to the existing `murmurHash2_32`. [#2791](https://github.com/yandex/ClickHouse/pull/2791)
-* Support for Nullable types in the ClickHouse ODBC driver (`ODBCDriver2` output format). [#2834](https://github.com/yandex/ClickHouse/pull/2834)
+* HTTPS can be used for replication. [#2760](https://github.com/ClickHouse/ClickHouse/pull/2760)
+* Added the functions `murmurHash2_64`, `murmurHash3_32`, `murmurHash3_64`, and `murmurHash3_128` in addition to the existing `murmurHash2_32`. [#2791](https://github.com/ClickHouse/ClickHouse/pull/2791)
+* Support for Nullable types in the ClickHouse ODBC driver (`ODBCDriver2` output format). [#2834](https://github.com/ClickHouse/ClickHouse/pull/2834)
* Support for `UUID` in the key columns.
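+The expanded murmur hash family in a quick sketch:
+```sql
+SELECT
+    murmurHash2_32('ClickHouse')  AS h2_32,   -- previously the only variant
+    murmurHash2_64('ClickHouse')  AS h2_64,
+    murmurHash3_32('ClickHouse')  AS h3_32,
+    murmurHash3_64('ClickHouse')  AS h3_64,
+    murmurHash3_128('ClickHouse') AS h3_128;  -- returns a FixedString(16)
+```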
### Improvements:
-* Clusters can be removed without restarting the server when they are deleted from the config files. [#2777](https://github.com/yandex/ClickHouse/pull/2777)
-* External dictionaries can be removed without restarting the server when they are removed from config files. [#2779](https://github.com/yandex/ClickHouse/pull/2779)
-* Added `SETTINGS` support for the `Kafka` table engine. [Alexander Marshalov](https://github.com/yandex/ClickHouse/pull/2781)
-* Improvements for the `UUID` data type (not yet complete). [#2618](https://github.com/yandex/ClickHouse/pull/2618)
-* Support for empty parts after merges in the `SummingMergeTree`, `CollapsingMergeTree` and `VersionedCollapsingMergeTree` engines. [#2815](https://github.com/yandex/ClickHouse/pull/2815)
-* Old records of completed mutations are deleted (`ALTER DELETE`). [#2784](https://github.com/yandex/ClickHouse/pull/2784)
-* Added the `system.merge_tree_settings` table. [Kirill Shvakov](https://github.com/yandex/ClickHouse/pull/2841)
-* The `system.tables` table now has dependency columns: `dependencies_database` and `dependencies_table`. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2851)
-* Added the `max_partition_size_to_drop` config option. [#2782](https://github.com/yandex/ClickHouse/pull/2782)
-* Added the `output_format_json_escape_forward_slashes` option. [Alexander Bocharov](https://github.com/yandex/ClickHouse/pull/2812)
-* Added the `max_fetch_partition_retries_count` setting. [#2831](https://github.com/yandex/ClickHouse/pull/2831)
-* Added the `prefer_localhost_replica` setting for disabling the preference for a local replica and going to a local replica without inter-process interaction. [#2832](https://github.com/yandex/ClickHouse/pull/2832)
-* The `quantileExact` aggregate function returns `nan` in the case of aggregation on an empty `Float32` or `Float64` set. [Sundy Li](https://github.com/yandex/ClickHouse/pull/2855)
+* Clusters can be removed without restarting the server when they are deleted from the config files. [#2777](https://github.com/ClickHouse/ClickHouse/pull/2777)
+* External dictionaries can be removed without restarting the server when they are removed from config files. [#2779](https://github.com/ClickHouse/ClickHouse/pull/2779)
+* Added `SETTINGS` support for the `Kafka` table engine. [Alexander Marshalov](https://github.com/ClickHouse/ClickHouse/pull/2781)
+* Improvements for the `UUID` data type (not yet complete). [#2618](https://github.com/ClickHouse/ClickHouse/pull/2618)
+* Support for empty parts after merges in the `SummingMergeTree`, `CollapsingMergeTree` and `VersionedCollapsingMergeTree` engines. [#2815](https://github.com/ClickHouse/ClickHouse/pull/2815)
+* Old records of completed mutations are deleted (`ALTER DELETE`). [#2784](https://github.com/ClickHouse/ClickHouse/pull/2784)
+* Added the `system.merge_tree_settings` table. [Kirill Shvakov](https://github.com/ClickHouse/ClickHouse/pull/2841)
+* The `system.tables` table now has dependency columns: `dependencies_database` and `dependencies_table`. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2851)
+* Added the `max_partition_size_to_drop` config option. [#2782](https://github.com/ClickHouse/ClickHouse/pull/2782)
+* Added the `output_format_json_escape_forward_slashes` option. [Alexander Bocharov](https://github.com/ClickHouse/ClickHouse/pull/2812)
+* Added the `max_fetch_partition_retries_count` setting. [#2831](https://github.com/ClickHouse/ClickHouse/pull/2831)
+* Added the `prefer_localhost_replica` setting, which lets you disable the preference for a local replica (by default, a local replica is queried directly, without inter-server interaction). [#2832](https://github.com/ClickHouse/ClickHouse/pull/2832)
+* The `quantileExact` aggregate function returns `nan` in the case of aggregation on an empty `Float32` or `Float64` set. [Sundy Li](https://github.com/ClickHouse/ClickHouse/pull/2855)
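+Two of the improvements above, sketched:
+```sql
+-- quantileExact over an empty Float64 set now returns nan
+SELECT quantileExact(x)
+FROM (SELECT toFloat64(number) AS x FROM system.numbers LIMIT 10)
+WHERE x < 0;
+-- Inspect MergeTree defaults through the new system table
+SELECT name, value FROM system.merge_tree_settings LIMIT 5;
+```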
### Bug fixes:
* Removed unnecessary escaping of the connection string parameters for ODBC, which made it impossible to establish a connection. This error occurred in version 18.6.0.
-* Fixed the logic for processing `REPLACE PARTITION` commands in the replication queue. If there are two `REPLACE` commands for the same partition, the incorrect logic could cause one of them to remain in the replication queue and not be executed. [#2814](https://github.com/yandex/ClickHouse/pull/2814)
-* Fixed a merge bug when all data parts were empty (parts that were formed from a merge or from `ALTER DELETE` if all data was deleted). This bug appeared in version 18.1.0. [#2930](https://github.com/yandex/ClickHouse/pull/2930)
-* Fixed an error for concurrent `Set` or `Join`. [Amos Bird](https://github.com/yandex/ClickHouse/pull/2823)
-* Fixed the `Block structure mismatch in UNION stream: different number of columns` error that occurred for `UNION ALL` queries inside a sub-query if one of the `SELECT` queries contains duplicate column names. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2094)
+* Fixed the logic for processing `REPLACE PARTITION` commands in the replication queue. If there are two `REPLACE` commands for the same partition, the incorrect logic could cause one of them to remain in the replication queue and not be executed. [#2814](https://github.com/ClickHouse/ClickHouse/pull/2814)
+* Fixed a merge bug when all data parts were empty (parts that were formed from a merge or from `ALTER DELETE` if all data was deleted). This bug appeared in version 18.1.0. [#2930](https://github.com/ClickHouse/ClickHouse/pull/2930)
+* Fixed an error for concurrent `Set` or `Join`. [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2823)
+* Fixed the `Block structure mismatch in UNION stream: different number of columns` error that occurred for `UNION ALL` queries inside a sub-query if one of the `SELECT` queries contains duplicate column names. [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2094)
* Fixed a memory leak if an exception occurred when connecting to a MySQL server.
* Fixed incorrect clickhouse-client response code in case of a query error.
-* Fixed incorrect behavior of materialized views containing DISTINCT. [#2795](https://github.com/yandex/ClickHouse/issues/2795)
+* Fixed incorrect behavior of materialized views containing DISTINCT. [#2795](https://github.com/ClickHouse/ClickHouse/issues/2795)
### Backward incompatible changes
@@ -1897,10 +1897,10 @@ This release contains exactly the same set of patches as 19.3.6.
### Build changes:
-* The allocator has been replaced: `jemalloc` is now used instead of `tcmalloc`. In some scenarios, this increases speed up to 20%. However, there are queries that have slowed by up to 20%. Memory consumption has been reduced by approximately 10% in some scenarios, with improved stability. With highly competitive loads, CPU usage in userspace and in system shows just a slight increase. [#2773](https://github.com/yandex/ClickHouse/pull/2773)
-* Use of libressl from a submodule. [#1983](https://github.com/yandex/ClickHouse/pull/1983) [#2807](https://github.com/yandex/ClickHouse/pull/2807)
-* Use of unixodbc from a submodule. [#2789](https://github.com/yandex/ClickHouse/pull/2789)
-* Use of mariadb-connector-c from a submodule. [#2785](https://github.com/yandex/ClickHouse/pull/2785)
+* The allocator has been replaced: `jemalloc` is now used instead of `tcmalloc`. In some scenarios, this increases speed by up to 20%. However, some queries have slowed down by up to 20%. Memory consumption has been reduced by approximately 10% in some scenarios, with improved stability. Under highly concurrent loads, CPU usage in userspace and in the kernel shows only a slight increase. [#2773](https://github.com/ClickHouse/ClickHouse/pull/2773)
+* Use of libressl from a submodule. [#1983](https://github.com/ClickHouse/ClickHouse/pull/1983) [#2807](https://github.com/ClickHouse/ClickHouse/pull/2807)
+* Use of unixodbc from a submodule. [#2789](https://github.com/ClickHouse/ClickHouse/pull/2789)
+* Use of mariadb-connector-c from a submodule. [#2785](https://github.com/ClickHouse/ClickHouse/pull/2785)
* Added functional test files to the repository that depend on the availability of test data (for the time being, without the test data itself).
## ClickHouse release 18.6.0, 2018-08-02
@@ -1909,49 +1909,49 @@ This release contains exactly the same set of patches as 19.3.6.
* Added support for ON expressions for the JOIN ON syntax:
`JOIN ON Expr([table.]column ...) = Expr([table.]column, ...) [AND Expr([table.]column, ...) = Expr([table.]column, ...) ...]`
-The expression must be a chain of equalities joined by the AND operator. Each side of the equality can be an arbitrary expression over the columns of one of the tables. The use of fully qualified column names is supported (`table.name`, `database.table.name`, `table_alias.name`, `subquery_alias.name`) for the right table. [#2742](https://github.com/yandex/ClickHouse/pull/2742)
-* HTTPS can be enabled for replication. [#2760](https://github.com/yandex/ClickHouse/pull/2760)
+The expression must be a chain of equalities joined by the AND operator. Each side of the equality can be an arbitrary expression over the columns of one of the tables. The use of fully qualified column names is supported (`table.name`, `database.table.name`, `table_alias.name`, `subquery_alias.name`) for the right table. An illustrative example is given after this list. [#2742](https://github.com/ClickHouse/ClickHouse/pull/2742)
+* HTTPS can be enabled for replication. [#2760](https://github.com/ClickHouse/ClickHouse/pull/2760)
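+
+For example (an illustrative query, not from the original changelog; `t1` and `t2` are hypothetical tables):
+
+```sql
+SELECT t1.id, t2.value
+FROM t1
+JOIN t2 ON t1.id = t2.ref_id AND lower(t1.tag) = lower(t2.tag)
+```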
### Improvements:
-* The server passes the patch component of its version to the client. Data about the patch version component is in `system.processes` and `query_log`. [#2646](https://github.com/yandex/ClickHouse/pull/2646)
+* The server passes the patch component of its version to the client. Data about the patch version component is in `system.processes` and `query_log`. [#2646](https://github.com/ClickHouse/ClickHouse/pull/2646)
## ClickHouse release 18.5.1, 2018-07-31
### New features:
-* Added the hash function `murmurHash2_32` [#2756](https://github.com/yandex/ClickHouse/pull/2756).
+* Added the hash function `murmurHash2_32` [#2756](https://github.com/ClickHouse/ClickHouse/pull/2756).
### Improvements:
-* Now you can use the `from_env` [#2741](https://github.com/yandex/ClickHouse/pull/2741) attribute to set values in config files from environment variables.
-* Added case-insensitive versions of the `coalesce`, `ifNull`, and `nullIf functions` [#2752](https://github.com/yandex/ClickHouse/pull/2752).
+* Now you can use the `from_env` [#2741](https://github.com/ClickHouse/ClickHouse/pull/2741) attribute to set values in config files from environment variables.
+* Added case-insensitive versions of the `coalesce`, `ifNull`, and `nullIf` functions [#2752](https://github.com/ClickHouse/ClickHouse/pull/2752).
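+
+For example (illustrative):
+
+```sql
+SELECT COALESCE(NULL, 1), IFNULL(NULL, 2), NULLIF(1, 1)
+-- same results as coalesce(NULL, 1), ifNull(NULL, 2), nullIf(1, 1)
+```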
### Bug fixes:
-* Fixed a possible bug when starting a replica [#2759](https://github.com/yandex/ClickHouse/pull/2759).
+* Fixed a possible bug when starting a replica [#2759](https://github.com/ClickHouse/ClickHouse/pull/2759).
## ClickHouse release 18.4.0, 2018-07-28
### New features:
-* Added system tables: `formats`, `data_type_families`, `aggregate_function_combinators`, `table_functions`, `table_engines`, `collations` [#2721](https://github.com/yandex/ClickHouse/pull/2721).
-* Added the ability to use a table function instead of a table as an argument of a `remote` or `cluster table function` [#2708](https://github.com/yandex/ClickHouse/pull/2708).
-* Support for `HTTP Basic` authentication in the replication protocol [#2727](https://github.com/yandex/ClickHouse/pull/2727).
-* The `has` function now allows searching for a numeric value in an array of `Enum` values [Maxim Khrisanfov](https://github.com/yandex/ClickHouse/pull/2699).
-* Support for adding arbitrary message separators when reading from `Kafka` [Amos Bird](https://github.com/yandex/ClickHouse/pull/2701).
+* Added system tables: `formats`, `data_type_families`, `aggregate_function_combinators`, `table_functions`, `table_engines`, `collations` [#2721](https://github.com/ClickHouse/ClickHouse/pull/2721).
+* Added the ability to use a table function instead of a table as an argument of a `remote` or `cluster` table function [#2708](https://github.com/ClickHouse/ClickHouse/pull/2708).
+* Support for `HTTP Basic` authentication in the replication protocol [#2727](https://github.com/ClickHouse/ClickHouse/pull/2727).
+* The `has` function now allows searching for a numeric value in an array of `Enum` values [Maxim Khrisanfov](https://github.com/ClickHouse/ClickHouse/pull/2699).
+* Support for adding arbitrary message separators when reading from `Kafka` [Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2701).
### Improvements:
-* The `ALTER TABLE t DELETE WHERE` query does not rewrite data parts that were not affected by the WHERE condition [#2694](https://github.com/yandex/ClickHouse/pull/2694).
+* The `ALTER TABLE t DELETE WHERE` query does not rewrite data parts that were not affected by the WHERE condition [#2694](https://github.com/ClickHouse/ClickHouse/pull/2694).
* The `use_minimalistic_checksums_in_zookeeper` option for `ReplicatedMergeTree` tables is enabled by default. This setting was added in version 1.1.54378, 2018-04-16. Versions that are older than 1.1.54378 can no longer be installed.
-* Support for running `KILL` and `OPTIMIZE` queries that specify `ON CLUSTER` [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2689).
+* Support for running `KILL` and `OPTIMIZE` queries that specify `ON CLUSTER` [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2689).
### Bug fixes:
-* Fixed the error `Column ... is not under an aggregate function and not in GROUP BY` for aggregation with an IN expression. This bug appeared in version 18.1.0. ([bbdd780b](https://github.com/yandex/ClickHouse/commit/bbdd780be0be06a0f336775941cdd536878dd2c2))
-* Fixed a bug in the `windowFunnel aggregate function` [Winter Zhang](https://github.com/yandex/ClickHouse/pull/2735).
-* Fixed a bug in the `anyHeavy` aggregate function ([a2101df2](https://github.com/yandex/ClickHouse/commit/a2101df25a6a0fba99aa71f8793d762af2b801ee))
+* Fixed the error `Column ... is not under an aggregate function and not in GROUP BY` for aggregation with an IN expression. This bug appeared in version 18.1.0. ([bbdd780b](https://github.com/ClickHouse/ClickHouse/commit/bbdd780be0be06a0f336775941cdd536878dd2c2))
+* Fixed a bug in the `windowFunnel` aggregate function [Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2735).
+* Fixed a bug in the `anyHeavy` aggregate function ([a2101df2](https://github.com/ClickHouse/ClickHouse/commit/a2101df25a6a0fba99aa71f8793d762af2b801ee))
* Fixed server crash when using the `countArray()` aggregate function.
### Backward incompatible changes:
@@ -1962,29 +1962,29 @@ The expression must be a chain of equalities joined by the AND operator. Each si
### New features:
-* Support for the `ALTER TABLE t DELETE WHERE` query for non-replicated MergeTree tables ([#2634](https://github.com/yandex/ClickHouse/pull/2634)).
-* Support for arbitrary types for the `uniq*` family of aggregate functions ([#2010](https://github.com/yandex/ClickHouse/issues/2010)).
-* Support for arbitrary types in comparison operators ([#2026](https://github.com/yandex/ClickHouse/issues/2026)).
-* The `users.xml` file allows setting a subnet mask in the format `10.0.0.1/255.255.255.0`. This is necessary for using masks for IPv6 networks with zeros in the middle ([#2637](https://github.com/yandex/ClickHouse/pull/2637)).
-* Added the `arrayDistinct` function ([#2670](https://github.com/yandex/ClickHouse/pull/2670)).
-* The SummingMergeTree engine can now work with AggregateFunction type columns ([Constantin S. Pan](https://github.com/yandex/ClickHouse/pull/2566)).
+* Support for the `ALTER TABLE t DELETE WHERE` query for non-replicated MergeTree tables ([#2634](https://github.com/ClickHouse/ClickHouse/pull/2634)).
+* Support for arbitrary types for the `uniq*` family of aggregate functions ([#2010](https://github.com/ClickHouse/ClickHouse/issues/2010)).
+* Support for arbitrary types in comparison operators ([#2026](https://github.com/ClickHouse/ClickHouse/issues/2026)).
+* The `users.xml` file allows setting a subnet mask in the format `10.0.0.1/255.255.255.0`. This is necessary for using masks for IPv6 networks with zeros in the middle ([#2637](https://github.com/ClickHouse/ClickHouse/pull/2637)).
+* Added the `arrayDistinct` function ([#2670](https://github.com/ClickHouse/ClickHouse/pull/2670)).
+* The SummingMergeTree engine can now work with AggregateFunction type columns ([Constantin S. Pan](https://github.com/ClickHouse/ClickHouse/pull/2566)).
### Improvements:
* Changed the numbering scheme for release versions. Now the first part contains the year of release (A.D., Moscow timezone, minus 2000), the second part contains the number for major changes (increases for most releases), and the third part is the patch version. Releases are still backwards compatible, unless otherwise stated in the changelog.
-* Faster conversions of floating-point numbers to a string ([Amos Bird](https://github.com/yandex/ClickHouse/pull/2664)).
-* If some rows were skipped during an insert due to parsing errors (this is possible with the `input_allow_errors_num` and `input_allow_errors_ratio` settings enabled), the number of skipped rows is now written to the server log ([Leonardo Cecchi](https://github.com/yandex/ClickHouse/pull/2669)).
+* Faster conversions of floating-point numbers to a string ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2664)).
+* If some rows were skipped during an insert due to parsing errors (this is possible with the `input_allow_errors_num` and `input_allow_errors_ratio` settings enabled), the number of skipped rows is now written to the server log ([Leonardo Cecchi](https://github.com/ClickHouse/ClickHouse/pull/2669)).
### Bug fixes:
-* Fixed the TRUNCATE command for temporary tables ([Amos Bird](https://github.com/yandex/ClickHouse/pull/2624)).
-* Fixed a rare deadlock in the ZooKeeper client library that occurred when there was a network error while reading the response ([c315200](https://github.com/yandex/ClickHouse/commit/c315200e64b87e44bdf740707fc857d1fdf7e947)).
-* Fixed an error during a CAST to Nullable types ([#1322](https://github.com/yandex/ClickHouse/issues/1322)).
-* Fixed the incorrect result of the `maxIntersection()` function when the boundaries of intervals coincided ([Michael Furmur](https://github.com/yandex/ClickHouse/pull/2657)).
-* Fixed incorrect transformation of the OR expression chain in a function argument ([chenxing-xc](https://github.com/yandex/ClickHouse/pull/2663)).
-* Fixed performance degradation for queries containing `IN (subquery)` expressions inside another subquery ([#2571](https://github.com/yandex/ClickHouse/issues/2571)).
-* Fixed incompatibility between servers with different versions in distributed queries that use a `CAST` function that isn't in uppercase letters ([fe8c4d6](https://github.com/yandex/ClickHouse/commit/fe8c4d64e434cacd4ceef34faa9005129f2190a5)).
-* Added missing quoting of identifiers for queries to an external DBMS ([#2635](https://github.com/yandex/ClickHouse/issues/2635)).
+* Fixed the TRUNCATE command for temporary tables ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2624)).
+* Fixed a rare deadlock in the ZooKeeper client library that occurred when there was a network error while reading the response ([c315200](https://github.com/ClickHouse/ClickHouse/commit/c315200e64b87e44bdf740707fc857d1fdf7e947)).
+* Fixed an error during a CAST to Nullable types ([#1322](https://github.com/ClickHouse/ClickHouse/issues/1322)).
+* Fixed the incorrect result of the `maxIntersection()` function when the boundaries of intervals coincided ([Michael Furmur](https://github.com/ClickHouse/ClickHouse/pull/2657)).
+* Fixed incorrect transformation of the OR expression chain in a function argument ([chenxing-xc](https://github.com/ClickHouse/ClickHouse/pull/2663)).
+* Fixed performance degradation for queries containing `IN (subquery)` expressions inside another subquery ([#2571](https://github.com/ClickHouse/ClickHouse/issues/2571)).
+* Fixed incompatibility between servers with different versions in distributed queries that use a `CAST` function that isn't in uppercase letters ([fe8c4d6](https://github.com/ClickHouse/ClickHouse/commit/fe8c4d64e434cacd4ceef34faa9005129f2190a5)).
+* Added missing quoting of identifiers for queries to an external DBMS ([#2635](https://github.com/ClickHouse/ClickHouse/issues/2635)).
### Backward incompatible changes:
@@ -1994,44 +1994,44 @@ The expression must be a chain of equalities joined by the AND operator. Each si
### New features:
-* Added the `histogram` aggregate function ([Mikhail Surin](https://github.com/yandex/ClickHouse/pull/2521)).
-* Now `OPTIMIZE TABLE ... FINAL` can be used without specifying partitions for `ReplicatedMergeTree` ([Amos Bird](https://github.com/yandex/ClickHouse/pull/2600)).
+* Added the `histogram` aggregate function ([Mikhail Surin](https://github.com/ClickHouse/ClickHouse/pull/2521)).
+* Now `OPTIMIZE TABLE ... FINAL` can be used without specifying partitions for `ReplicatedMergeTree` ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2600)).
### Bug fixes:
* Fixed a problem with a very small timeout for sockets (one second) for reading and writing when sending and downloading replicated data, which made it impossible to download larger parts if there is a load on the network or disk (it resulted in cyclical attempts to download parts). This error occurred in version 1.1.54388.
* Fixed issues when using chroot in ZooKeeper if you inserted duplicate data blocks in the table.
-* The `has` function now works correctly for an array with Nullable elements ([#2115](https://github.com/yandex/ClickHouse/issues/2115)).
+* The `has` function now works correctly for an array with Nullable elements ([#2115](https://github.com/ClickHouse/ClickHouse/issues/2115)).
* The `system.tables` table now works correctly when used in distributed queries. The `metadata_modification_time` and `engine_full` columns are now non-virtual. Fixed an error that occurred if only these columns were queried from the table.
-* Fixed how an empty `TinyLog` table works after inserting an empty data block ([#2563](https://github.com/yandex/ClickHouse/issues/2563)).
+* Fixed how an empty `TinyLog` table works after inserting an empty data block ([#2563](https://github.com/ClickHouse/ClickHouse/issues/2563)).
* The `system.zookeeper` table works if the value of the node in ZooKeeper is NULL.
## ClickHouse release 1.1.54390, 2018-07-06
### New features:
-* Queries can be sent in `multipart/form-data` format (in the `query` field), which is useful if external data is also sent for query processing ([Olga Hvostikova](https://github.com/yandex/ClickHouse/pull/2490)).
-* Added the ability to enable or disable processing single or double quotes when reading data in CSV format. You can configure this in the `format_csv_allow_single_quotes` and `format_csv_allow_double_quotes` settings ([Amos Bird](https://github.com/yandex/ClickHouse/pull/2574)).
-* Now `OPTIMIZE TABLE ... FINAL` can be used without specifying the partition for non-replicated variants of `MergeTree` ([Amos Bird](https://github.com/yandex/ClickHouse/pull/2599)).
+* Queries can be sent in `multipart/form-data` format (in the `query` field), which is useful if external data is also sent for query processing ([Olga Hvostikova](https://github.com/ClickHouse/ClickHouse/pull/2490)).
+* Added the ability to enable or disable processing single or double quotes when reading data in CSV format. You can configure this in the `format_csv_allow_single_quotes` and `format_csv_allow_double_quotes` settings ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2574)).
+* Now `OPTIMIZE TABLE ... FINAL` can be used without specifying the partition for non-replicated variants of `MergeTree` ([Amos Bird](https://github.com/ClickHouse/ClickHouse/pull/2599)).
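+
+For instance (illustrative; `hits` is a hypothetical table):
+
+```sql
+SET format_csv_allow_single_quotes = 0;
+OPTIMIZE TABLE hits FINAL;
+```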
### Improvements:
-* Improved performance, reduced memory consumption, and correct memory consumption tracking with use of the IN operator when a table index could be used ([#2584](https://github.com/yandex/ClickHouse/pull/2584)).
+* Improved performance, reduced memory consumption, and correct memory consumption tracking with use of the IN operator when a table index could be used ([#2584](https://github.com/ClickHouse/ClickHouse/pull/2584)).
* Removed redundant checking of checksums when adding a data part. This is important when there are a large number of replicas, because in these cases the total number of checks was equal to N^2.
-* Added support for `Array(Tuple(...))` arguments for the `arrayEnumerateUniq` function ([#2573](https://github.com/yandex/ClickHouse/pull/2573)).
-* Added `Nullable` support for the `runningDifference` function ([#2594](https://github.com/yandex/ClickHouse/pull/2594)).
-* Improved query analysis performance when there is a very large number of expressions ([#2572](https://github.com/yandex/ClickHouse/pull/2572)).
-* Faster selection of data parts for merging in `ReplicatedMergeTree` tables. Faster recovery of the ZooKeeper session ([#2597](https://github.com/yandex/ClickHouse/pull/2597)).
-* The `format_version.txt` file for `MergeTree` tables is re-created if it is missing, which makes sense if ClickHouse is launched after copying the directory structure without files ([Ciprian Hacman](https://github.com/yandex/ClickHouse/pull/2593)).
+* Added support for `Array(Tuple(...))` arguments for the `arrayEnumerateUniq` function ([#2573](https://github.com/ClickHouse/ClickHouse/pull/2573)).
+* Added `Nullable` support for the `runningDifference` function ([#2594](https://github.com/ClickHouse/ClickHouse/pull/2594)).
+* Improved query analysis performance when there is a very large number of expressions ([#2572](https://github.com/ClickHouse/ClickHouse/pull/2572)).
+* Faster selection of data parts for merging in `ReplicatedMergeTree` tables. Faster recovery of the ZooKeeper session ([#2597](https://github.com/ClickHouse/ClickHouse/pull/2597)).
+* The `format_version.txt` file for `MergeTree` tables is re-created if it is missing, which makes sense if ClickHouse is launched after copying the directory structure without files ([Ciprian Hacman](https://github.com/ClickHouse/ClickHouse/pull/2593)).
### Bug fixes:
* Fixed a bug when working with ZooKeeper that could make it impossible to recover the session and readonly states of tables before restarting the server.
* Fixed a bug when working with ZooKeeper that could result in old nodes not being deleted if the session is interrupted.
-* Fixed an error in the `quantileTDigest` function for Float arguments (this bug was introduced in version 1.1.54388) ([Mikhail Surin](https://github.com/yandex/ClickHouse/pull/2553)).
-* Fixed a bug in the index for MergeTree tables if the primary key column is located inside the function for converting types between signed and unsigned integers of the same size ([#2603](https://github.com/yandex/ClickHouse/pull/2603)).
-* Fixed segfault if `macros` are used but they aren't in the config file ([#2570](https://github.com/yandex/ClickHouse/pull/2570)).
-* Fixed switching to the default database when reconnecting the client ([#2583](https://github.com/yandex/ClickHouse/pull/2583)).
+* Fixed an error in the `quantileTDigest` function for Float arguments (this bug was introduced in version 1.1.54388) ([Mikhail Surin](https://github.com/ClickHouse/ClickHouse/pull/2553)).
+* Fixed a bug in the index for MergeTree tables if the primary key column is located inside the function for converting types between signed and unsigned integers of the same size ([#2603](https://github.com/ClickHouse/ClickHouse/pull/2603)).
+* Fixed segfault if `macros` are used but they aren't in the config file ([#2570](https://github.com/ClickHouse/ClickHouse/pull/2570)).
+* Fixed switching to the default database when reconnecting the client ([#2583](https://github.com/ClickHouse/ClickHouse/pull/2583)).
* Fixed a bug that occurred when the `use_index_for_in_with_subqueries` setting was disabled.
### Security fix:
@@ -2044,50 +2044,50 @@ The expression must be a chain of equalities joined by the AND operator. Each si
* Support for the `ALTER TABLE t DELETE WHERE` query for replicated tables. Added the `system.mutations` table to track progress of this type of queries.
* Support for the `ALTER TABLE t [REPLACE|ATTACH] PARTITION` query for \*MergeTree tables.
-* Support for the `TRUNCATE TABLE` query ([Winter Zhang](https://github.com/yandex/ClickHouse/pull/2260))
+* Support for the `TRUNCATE TABLE` query ([Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2260))
* Several new `SYSTEM` queries for replicated tables (`RESTART REPLICAS`, `SYNC REPLICA`, `[STOP|START] [MERGES|FETCHES|SENDS REPLICATED|REPLICATION QUEUES]`).
-* Added the ability to write to a table with the MySQL engine and the corresponding table function ([sundy-li](https://github.com/yandex/ClickHouse/pull/2294)).
-* Added the `url()` table function and the `URL` table engine ([Alexander Sapin](https://github.com/yandex/ClickHouse/pull/2501)).
-* Added the `windowFunnel` aggregate function ([sundy-li](https://github.com/yandex/ClickHouse/pull/2352)).
-* New `startsWith` and `endsWith` functions for strings ([Vadim Plakhtinsky](https://github.com/yandex/ClickHouse/pull/2429)).
-* The `numbers()` table function now allows you to specify the offset ([Winter Zhang](https://github.com/yandex/ClickHouse/pull/2535)).
+* Added the ability to write to a table with the MySQL engine and the corresponding table function ([sundy-li](https://github.com/ClickHouse/ClickHouse/pull/2294)).
+* Added the `url()` table function and the `URL` table engine ([Alexander Sapin](https://github.com/ClickHouse/ClickHouse/pull/2501)).
+* Added the `windowFunnel` aggregate function ([sundy-li](https://github.com/ClickHouse/ClickHouse/pull/2352)).
+* New `startsWith` and `endsWith` functions for strings ([Vadim Plakhtinsky](https://github.com/ClickHouse/ClickHouse/pull/2429)).
+* The `numbers()` table function now allows you to specify the offset ([Winter Zhang](https://github.com/ClickHouse/ClickHouse/pull/2535)). See the illustrative example after this list.
* The password to `clickhouse-client` can be entered interactively.
-* Server logs can now be sent to syslog ([Alexander Krasheninnikov](https://github.com/yandex/ClickHouse/pull/2459)).
-* Support for logging in dictionaries with a shared library source ([Alexander Sapin](https://github.com/yandex/ClickHouse/pull/2472)).
-* Support for custom CSV delimiters ([Ivan Zhukov](https://github.com/yandex/ClickHouse/pull/2263))
+* Server logs can now be sent to syslog ([Alexander Krasheninnikov](https://github.com/ClickHouse/ClickHouse/pull/2459)).
+* Support for logging in dictionaries with a shared library source ([Alexander Sapin](https://github.com/ClickHouse/ClickHouse/pull/2472)).
+* Support for custom CSV delimiters ([Ivan Zhukov](https://github.com/ClickHouse/ClickHouse/pull/2263))
* Added the `date_time_input_format` setting. If you switch this setting to `'best_effort'`, DateTime values will be read in a wide range of formats.
* Added the `clickhouse-obfuscator` utility for data obfuscation. Usage example: publishing data used in performance tests.
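+
+Illustrative examples for the `numbers()` offset and the `date_time_input_format` setting (not from the original changelog):
+
+```sql
+SELECT * FROM numbers(10, 3);  -- offset form: yields 10, 11, 12
+SET date_time_input_format = 'best_effort';  -- accept DateTime values in a wide range of text formats
+```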
### Experimental features:
-* Added the ability to calculate `and` arguments only where they are needed ([Anastasia Tsarkova](https://github.com/yandex/ClickHouse/pull/2272))
-* JIT compilation to native code is now available for some expressions ([pyos](https://github.com/yandex/ClickHouse/pull/2277)).
+* Added the ability to calculate `and` arguments only where they are needed ([Anastasia Tsarkova](https://github.com/ClickHouse/ClickHouse/pull/2272))
+* JIT compilation to native code is now available for some expressions ([pyos](https://github.com/ClickHouse/ClickHouse/pull/2277)).
### Bug fixes:
* Duplicates no longer appear for a query with `DISTINCT` and `ORDER BY`.
* Queries with `ARRAY JOIN` and `arrayFilter` no longer return an incorrect result.
-* Fixed an error when reading an array column from a Nested structure ([#2066](https://github.com/yandex/ClickHouse/issues/2066)).
+* Fixed an error when reading an array column from a Nested structure ([#2066](https://github.com/ClickHouse/ClickHouse/issues/2066)).
* Fixed an error when analyzing queries with a HAVING clause like `HAVING tuple IN (...)`.
* Fixed an error when analyzing queries with recursive aliases.
-* Fixed an error when reading from ReplacingMergeTree with a condition in PREWHERE that filters all rows ([#2525](https://github.com/yandex/ClickHouse/issues/2525)).
+* Fixed an error when reading from ReplacingMergeTree with a condition in PREWHERE that filters all rows ([#2525](https://github.com/ClickHouse/ClickHouse/issues/2525)).
* User profile settings were not applied when using sessions in the HTTP interface.
* Fixed how settings are applied from the command line parameters in clickhouse-local.
* The ZooKeeper client library now uses the session timeout received from the server.
* Fixed a bug in the ZooKeeper client library when the client waited for the server response longer than the timeout.
-* Fixed pruning of parts for queries with conditions on partition key columns ([#2342](https://github.com/yandex/ClickHouse/issues/2342)).
-* Merges are now possible after `CLEAR COLUMN IN PARTITION` ([#2315](https://github.com/yandex/ClickHouse/issues/2315)).
-* Type mapping in the ODBC table function has been fixed ([sundy-li](https://github.com/yandex/ClickHouse/pull/2268)).
-* Type comparisons have been fixed for `DateTime` with and without the time zone ([Alexander Bocharov](https://github.com/yandex/ClickHouse/pull/2400)).
+* Fixed pruning of parts for queries with conditions on partition key columns ([#2342](https://github.com/ClickHouse/ClickHouse/issues/2342)).
+* Merges are now possible after `CLEAR COLUMN IN PARTITION` ([#2315](https://github.com/ClickHouse/ClickHouse/issues/2315)).
+* Type mapping in the ODBC table function has been fixed ([sundy-li](https://github.com/ClickHouse/ClickHouse/pull/2268)).
+* Type comparisons have been fixed for `DateTime` with and without the time zone ([Alexander Bocharov](https://github.com/ClickHouse/ClickHouse/pull/2400)).
* Fixed syntactic parsing and formatting of the `CAST` operator.
-* Fixed insertion into a materialized view for the Distributed table engine ([Babacar Diassé](https://github.com/yandex/ClickHouse/pull/2411)).
-* Fixed a race condition when writing data from the `Kafka` engine to materialized views ([Yangkuan Liu](https://github.com/yandex/ClickHouse/pull/2448)).
+* Fixed insertion into a materialized view for the Distributed table engine ([Babacar Diassé](https://github.com/ClickHouse/ClickHouse/pull/2411)).
+* Fixed a race condition when writing data from the `Kafka` engine to materialized views ([Yangkuan Liu](https://github.com/ClickHouse/ClickHouse/pull/2448)).
* Fixed SSRF in the remote() table function.
-* Fixed exit behavior of `clickhouse-client` in multiline mode ([#2510](https://github.com/yandex/ClickHouse/issues/2510)).
+* Fixed exit behavior of `clickhouse-client` in multiline mode ([#2510](https://github.com/ClickHouse/ClickHouse/issues/2510)).
### Improvements:
-* Background tasks in replicated tables are now performed in a thread pool instead of in separate threads ([Silviu Caragea](https://github.com/yandex/ClickHouse/pull/1722)).
+* Background tasks in replicated tables are now performed in a thread pool instead of in separate threads ([Silviu Caragea](https://github.com/ClickHouse/ClickHouse/pull/1722)).
* Improved LZ4 compression performance.
* Faster analysis for queries with a large number of JOINs and sub-queries.
* The DNS cache is now updated automatically when there are too many network errors.
@@ -2095,8 +2095,8 @@ The expression must be a chain of equalities joined by the AND operator. Each si
* Corrected the discrepancy in the event counters `Query`, `SelectQuery`, and `InsertQuery`.
* Expressions like `tuple IN (SELECT tuple)` are allowed if the tuple types match.
* A server with replicated tables can start even if you haven't configured ZooKeeper.
-* When calculating the number of available CPU cores, limits on cgroups are now taken into account ([Atri Sharma](https://github.com/yandex/ClickHouse/pull/2325)).
-* Added chown for config directories in the systemd config file ([Mikhail Shiryaev](https://github.com/yandex/ClickHouse/pull/2421)).
+* When calculating the number of available CPU cores, limits on cgroups are now taken into account ([Atri Sharma](https://github.com/ClickHouse/ClickHouse/pull/2325)).
+* Added chown for config directories in the systemd config file ([Mikhail Shiryaev](https://github.com/ClickHouse/ClickHouse/pull/2421)).
### Build changes:
@@ -2104,10 +2104,10 @@ The expression must be a chain of equalities joined by the AND operator. Each si
* Added the ability to build llvm from submodule.
* The version of the librdkafka library has been updated to v0.11.4.
* Added the ability to use the system libcpuid library. The library version has been updated to 0.4.0.
-* Fixed the build using the vectorclass library ([Babacar Diassé](https://github.com/yandex/ClickHouse/pull/2274)).
+* Fixed the build using the vectorclass library ([Babacar Diassé](https://github.com/ClickHouse/ClickHouse/pull/2274)).
* Cmake now generates files for ninja by default (like when using `-G Ninja`).
-* Added the ability to use the libtinfo library instead of libtermcap ([Georgy Kondratiev](https://github.com/yandex/ClickHouse/pull/2519)).
-* Fixed a header file conflict in Fedora Rawhide ([#2520](https://github.com/yandex/ClickHouse/issues/2520)).
+* Added the ability to use the libtinfo library instead of libtermcap ([Georgy Kondratiev](https://github.com/ClickHouse/ClickHouse/pull/2519)).
+* Fixed a header file conflict in Fedora Rawhide ([#2520](https://github.com/ClickHouse/ClickHouse/issues/2520)).
### Backward incompatible changes:
@@ -2215,7 +2215,7 @@ The expression must be a chain of equalities joined by the AND operator. Each si
* Added the `system.macros` table and auto updating of macros when the config file is changed.
* Added the `SYSTEM RELOAD CONFIG` query.
-* Added the `maxIntersections(left_col, right_col)` aggregate function, which returns the maximum number of simultaneously intersecting intervals `[left; right]`. The `maxIntersectionsPosition(left, right)` function returns the beginning of the "maximum" interval. ([Michael Furmur](https://github.com/yandex/ClickHouse/pull/2012)).
+* Added the `maxIntersections(left_col, right_col)` aggregate function, which returns the maximum number of simultaneously intersecting intervals `[left; right]`. The `maxIntersectionsPosition(left, right)` function returns the beginning of the "maximum" interval. ([Michael Furmur](https://github.com/ClickHouse/ClickHouse/pull/2012)).
### Improvements:
diff --git a/README.md b/README.md
index 45c626604fd..93e4af1a43d 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-[![ClickHouse — open source distributed column-oriented DBMS](https://github.com/yandex/ClickHouse/raw/master/website/images/logo-400x240.png)](https://clickhouse.yandex)
+[![ClickHouse — open source distributed column-oriented DBMS](https://github.com/ClickHouse/ClickHouse/raw/master/website/images/logo-400x240.png)](https://clickhouse.yandex)
ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.
diff --git a/SECURITY.md b/SECURITY.md
index c3682d6c499..62a70166dab 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -14,4 +14,4 @@ currently being supported with security updates:
## Reporting a Vulnerability
To report a potential vulnerability in ClickHouse please use the security advisory feature of GitHub:
-https://github.com/yandex/ClickHouse/security/advisories
+https://github.com/ClickHouse/ClickHouse/security/advisories
diff --git a/cmake/sanitize.cmake b/cmake/sanitize.cmake
index 9e8ef3e857a..04f2b80d346 100644
--- a/cmake/sanitize.cmake
+++ b/cmake/sanitize.cmake
@@ -14,8 +14,11 @@ if (SANITIZE)
endif ()
elseif (SANITIZE STREQUAL "memory")
- set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} -fsanitize=memory -fsanitize-memory-track-origins")
- set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} -fsanitize=memory -fsanitize-memory-track-origins")
+ set (MSAN_FLAGS "-fsanitize=memory -fsanitize-memory-track-origins -fno-optimize-sibling-calls")
+
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} ${MSAN_FLAGS}")
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} ${MSAN_FLAGS}")
+
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=memory")
endif()
@@ -23,6 +26,28 @@ if (SANITIZE)
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libmsan")
endif ()
+ # Temporarily disable many external libraries that don't work under
+ # MemorySanitizer yet.
+ set (ENABLE_HDFS 0 CACHE BOOL "")
+ set (ENABLE_CAPNP 0 CACHE BOOL "")
+ set (ENABLE_RDKAFKA 0 CACHE BOOL "")
+ set (ENABLE_ICU 0 CACHE BOOL "")
+ set (ENABLE_POCO_MONGODB 0 CACHE BOOL "")
+ set (ENABLE_POCO_NETSSL 0 CACHE BOOL "")
+ set (ENABLE_POCO_ODBC 0 CACHE BOOL "")
+ set (ENABLE_ODBC 0 CACHE BOOL "")
+ set (ENABLE_MYSQL 0 CACHE BOOL "")
+ set (ENABLE_EMBEDDED_COMPILER 0 CACHE BOOL "")
+ set (USE_INTERNAL_CAPNP_LIBRARY 0 CACHE BOOL "")
+ set (USE_SIMDJSON 0 CACHE BOOL "")
+ set (ENABLE_READLINE 0 CACHE BOOL "")
+ set (ENABLE_ORC 0 CACHE BOOL "")
+ set (ENABLE_PARQUET 0 CACHE BOOL "")
+ set (USE_CAPNP 0 CACHE BOOL "")
+ set (USE_INTERNAL_ORC_LIBRARY 0 CACHE BOOL "")
+ set (USE_ORC 0 CACHE BOOL "")
+ set (ENABLE_SSL 0 CACHE BOOL "")
+
elseif (SANITIZE STREQUAL "thread")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} -fsanitize=thread")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} -fsanitize=thread")
diff --git a/cmake/target.cmake b/cmake/target.cmake
index eb11fc57cf1..1be6abe8152 100644
--- a/cmake/target.cmake
+++ b/cmake/target.cmake
@@ -1,12 +1,12 @@
if (CMAKE_SYSTEM_NAME MATCHES "Linux")
set (OS_LINUX 1)
- add_compile_definitions(OS_LINUX)
+ add_definitions(-D OS_LINUX)
elseif (CMAKE_SYSTEM_NAME MATCHES "FreeBSD")
set (OS_FREEBSD 1)
- add_compile_definitions(OS_FREEBSD)
+ add_definitions(-D OS_FREEBSD)
elseif (CMAKE_SYSTEM_NAME MATCHES "Darwin")
set (OS_DARWIN 1)
- add_compile_definitions(OS_DARWIN)
+ add_definitions(-D OS_DARWIN)
endif ()
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
diff --git a/dbms/cmake/version.cmake b/dbms/cmake/version.cmake
index a2decdff605..8dcdcf64a7a 100644
--- a/dbms/cmake/version.cmake
+++ b/dbms/cmake/version.cmake
@@ -1,11 +1,11 @@
# This strings autochanged from release_lib.sh:
-set(VERSION_REVISION 54426)
+set(VERSION_REVISION 54427)
set(VERSION_MAJOR 19)
-set(VERSION_MINOR 15)
+set(VERSION_MINOR 16)
set(VERSION_PATCH 1)
-set(VERSION_GITHASH 6f1a8c37abe6ee4e7ee74c0b5cb9c05a87417b61)
-set(VERSION_DESCRIBE v19.15.1.1-prestable)
-set(VERSION_STRING 19.15.1.1)
+set(VERSION_GITHASH 38f65a6a2120d2e76bcf71131068f41195149dfc)
+set(VERSION_DESCRIBE v19.16.1.1-prestable)
+set(VERSION_STRING 19.16.1.1)
# end of autochange
set(VERSION_EXTRA "" CACHE STRING "")
diff --git a/dbms/src/Common/DiskSpaceMonitor.cpp b/dbms/src/Common/DiskSpaceMonitor.cpp
index 5b07e11f31b..967aa34ee40 100644
--- a/dbms/src/Common/DiskSpaceMonitor.cpp
+++ b/dbms/src/Common/DiskSpaceMonitor.cpp
@@ -1,10 +1,12 @@
#include
+#include
+#include
#include
-#include
#include
+
namespace DB
{
@@ -45,7 +47,7 @@ std::filesystem::path getMountPoint(std::filesystem::path absolute_path)
return absolute_path;
}
- /// Returns name of filesystem mounted to mount_point
+/// Returns name of filesystem mounted to mount_point
#if !defined(__linux__)
[[noreturn]]
#endif
@@ -65,7 +67,7 @@ std::string getFilesystemName([[maybe_unused]] const std::string & mount_point)
throw DB::Exception("Cannot find name of filesystem by mount point " + mount_point, ErrorCodes::SYSTEM_ERROR);
return fs_info.mnt_fsname;
#else
- throw DB::Exception("Supported on linux only", ErrorCodes::NOT_IMPLEMENTED);
+ throw DB::Exception("The function getFilesystemName is supported on Linux only", ErrorCodes::NOT_IMPLEMENTED);
#endif
}
@@ -82,7 +84,7 @@ bool Disk::tryReserve(UInt64 bytes) const
    std::lock_guard<std::mutex> lock(mutex);
if (bytes == 0)
{
- LOG_DEBUG(&Logger::get("DiskSpaceMonitor"), "Reserving 0 bytes on disk " << name);
+ LOG_DEBUG(&Logger::get("DiskSpaceMonitor"), "Reserving 0 bytes on disk " << backQuote(name));
++reservation_count;
return true;
}
@@ -93,7 +95,8 @@ bool Disk::tryReserve(UInt64 bytes) const
{
LOG_DEBUG(
&Logger::get("DiskSpaceMonitor"),
- "Reserving " << bytes << " bytes on disk " << name << " having unreserved " << unreserved_space << " bytes.");
+ "Reserving " << formatReadableSizeWithBinarySuffix(bytes) << " on disk " << backQuote(name)
+ << ", having unreserved " << formatReadableSizeWithBinarySuffix(unreserved_space) << ".");
++reservation_count;
reserved_bytes += bytes;
return true;
@@ -283,14 +286,14 @@ Volume::Volume(
            max_data_part_size = static_cast<UInt64>(sum_size * ratio / disks.size());
for (size_t i = 0; i < disks.size(); ++i)
if (sizes[i] < max_data_part_size)
- LOG_WARNING(logger, "Disk " << disks[i]->getName() << " on volume " << config_prefix <<
- " have not enough space (" << sizes[i] <<
+ LOG_WARNING(logger, "Disk " << backQuote(disks[i]->getName()) << " on volume " << backQuote(config_prefix) <<
+ " have not enough space (" << formatReadableSizeWithBinarySuffix(sizes[i]) <<
") for containing part the size of max_data_part_size (" <<
- max_data_part_size << ")");
+ formatReadableSizeWithBinarySuffix(max_data_part_size) << ")");
}
constexpr UInt64 MIN_PART_SIZE = 8u * 1024u * 1024u;
if (max_data_part_size < MIN_PART_SIZE)
- LOG_WARNING(logger, "Volume '" << name << "' max_data_part_size is too low ("
+ LOG_WARNING(logger, "Volume " << backQuote(name) << " max_data_part_size is too low ("
<< formatReadableSizeWithBinarySuffix(max_data_part_size) << " < "
<< formatReadableSizeWithBinarySuffix(MIN_PART_SIZE) << ")");
}
@@ -505,7 +508,7 @@ StoragePolicySelector::StoragePolicySelector(
ErrorCodes::EXCESSIVE_ELEMENT_IN_CONFIG);
        policies.emplace(name, std::make_shared<StoragePolicy>(name, config, config_prefix + "." + name, disks));
- LOG_INFO(&Logger::get("StoragePolicySelector"), "Storage policy " << name << " loaded");
+ LOG_INFO(&Logger::get("StoragePolicySelector"), "Storage policy " << backQuote(name) << " loaded");
}
constexpr auto default_storage_policy_name = "default";
diff --git a/dbms/src/Common/ErrorCodes.cpp b/dbms/src/Common/ErrorCodes.cpp
index e2ee9720d68..b9b0d90e505 100644
--- a/dbms/src/Common/ErrorCodes.cpp
+++ b/dbms/src/Common/ErrorCodes.cpp
@@ -457,6 +457,7 @@ namespace ErrorCodes
extern const int UNKNOWN_PROTOCOL = 480;
extern const int PATH_ACCESS_DENIED = 481;
extern const int DICTIONARY_ACCESS_DENIED = 482;
+ extern const int TOO_MANY_REDIRECTS = 483;
extern const int KEEPER_EXCEPTION = 999;
extern const int POCO_EXCEPTION = 1000;
diff --git a/dbms/src/Common/SensitiveDataMasker.h b/dbms/src/Common/SensitiveDataMasker.h
index 942b97181c5..99a5c8c72b4 100644
--- a/dbms/src/Common/SensitiveDataMasker.h
+++ b/dbms/src/Common/SensitiveDataMasker.h
@@ -55,7 +55,7 @@ public:
size_t wipeSensitiveData(std::string & data) const;
/// setInstance is not thread-safe and should be called once in single-thread mode.
- /// https://github.com/yandex/ClickHouse/pull/6810#discussion_r321183367
+ /// https://github.com/ClickHouse/ClickHouse/pull/6810#discussion_r321183367
    static void setInstance(std::unique_ptr<SensitiveDataMasker> sensitive_data_masker_);
static SensitiveDataMasker * getInstance();
diff --git a/dbms/src/Common/checkStackSize.cpp b/dbms/src/Common/checkStackSize.cpp
index b2726401937..62f04da70c7 100644
--- a/dbms/src/Common/checkStackSize.cpp
+++ b/dbms/src/Common/checkStackSize.cpp
@@ -35,7 +35,9 @@ void checkStackSize()
// Stack size for the main thread is 8MB on OSX excluding the guard page size.
pthread_t thread = pthread_self();
max_stack_size = pthread_main_np() ? (8 * 1024 * 1024) : pthread_get_stacksize_np(thread);
- stack_address = pthread_get_stackaddr_np(thread);
+
+    // stack_address points to the start of the stack, not the end that pthread_get_stackaddr_np returns.
+    stack_address = reinterpret_cast<void *>(reinterpret_cast<uintptr_t>(pthread_get_stackaddr_np(thread)) - max_stack_size);
#else
pthread_attr_t attr;
# if defined(__FreeBSD__)
diff --git a/dbms/src/Core/Block.cpp b/dbms/src/Core/Block.cpp
index b045b9ec1ff..c64cf387a3b 100644
--- a/dbms/src/Core/Block.cpp
+++ b/dbms/src/Core/Block.cpp
@@ -219,11 +219,14 @@ size_t Block::getPositionByName(const std::string & name) const
}
-void Block::checkNumberOfRows() const
+void Block::checkNumberOfRows(bool allow_null_columns) const
{
ssize_t rows = -1;
for (const auto & elem : data)
{
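+        /// A nullptr column is tolerated only when the caller explicitly allows it;
+        /// used by debug-only consistency checks (see the IFunction.cpp change below).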
+ if (!elem.column && allow_null_columns)
+ continue;
+
if (!elem.column)
throw Exception("Column " + elem.name + " in block is nullptr, in method checkNumberOfRows."
, ErrorCodes::SIZES_OF_COLUMNS_DOESNT_MATCH);
diff --git a/dbms/src/Core/Block.h b/dbms/src/Core/Block.h
index 4a93e5ed803..ae8b07718dd 100644
--- a/dbms/src/Core/Block.h
+++ b/dbms/src/Core/Block.h
@@ -90,7 +90,7 @@ public:
size_t columns() const { return data.size(); }
/// Checks that every column in block is not nullptr and has same number of elements.
- void checkNumberOfRows() const;
+ void checkNumberOfRows(bool allow_null_columns = false) const;
/// Approximate number of bytes in memory - for profiling and limits.
size_t bytes() const;
diff --git a/dbms/src/Core/Settings.h b/dbms/src/Core/Settings.h
index cacaf883fb7..6d530339a2d 100644
--- a/dbms/src/Core/Settings.h
+++ b/dbms/src/Core/Settings.h
struct Settings : public SettingsCollection<Settings>
\
M(SettingBool, add_http_cors_header, false, "Write add http CORS header.") \
\
+    M(SettingUInt64, max_http_get_redirects, 0, "Max number of HTTP GET redirect hops allowed. Make sure additional security measures are in place to prevent a malicious server from redirecting your requests to unexpected services.") \
+ \
M(SettingBool, input_format_skip_unknown_fields, false, "Skip columns with unknown names from input data (it works for JSONEachRow, CSVWithNames, TSVWithNames and TSKV formats).") \
M(SettingBool, input_format_with_names_use_header, false, "For TSVWithNames and CSVWithNames input formats this controls whether format parser is to assume that column data appear in the input exactly as they are specified in the header.") \
M(SettingBool, input_format_import_nested_json, false, "Map nested JSON data to nested tables (it works for JSONEachRow format).") \
diff --git a/dbms/src/Functions/FunctionsCoding.h b/dbms/src/Functions/FunctionsCoding.h
index ed34863765d..1ab00d725f6 100644
--- a/dbms/src/Functions/FunctionsCoding.h
+++ b/dbms/src/Functions/FunctionsCoding.h
@@ -946,9 +946,10 @@ public:
{
WhichDataType which(arguments[0]);
- if (!which.isStringOrFixedString()
- && !which.isDateOrDateTime()
- && !which.isUInt())
+ if (!which.isStringOrFixedString() &&
+ !which.isDateOrDateTime() &&
+ !which.isUInt() &&
+ !which.isFloat())
throw Exception("Illegal type " + arguments[0]->getName() + " of argument of function " + getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
@@ -1021,6 +1022,45 @@ public:
}
}
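+    /// For floats, hex() dumps the raw in-memory bytes of the value (little-endian
+    /// on x86); e.g. hex(toFloat32(1)) is expected to yield '0000803F' (illustrative).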
+    template <typename T>
+    bool tryExecuteFloat(const IColumn * col, ColumnPtr & col_res)
+ {
+        const ColumnVector<T> * col_vec = checkAndGetColumn<ColumnVector<T>>(col);
+
+ static constexpr size_t FLOAT_HEX_LENGTH = sizeof(T) * 2 + 1; /// Including trailing zero byte.
+
+ if (col_vec)
+ {
+ auto col_str = ColumnString::create();
+ ColumnString::Chars & out_vec = col_str->getChars();
+ ColumnString::Offsets & out_offsets = col_str->getOffsets();
+
+            const typename ColumnVector<T>::Container & in_vec = col_vec->getData();
+
+ size_t size = in_vec.size();
+ out_offsets.resize(size);
+ out_vec.resize(size * FLOAT_HEX_LENGTH);
+
+ size_t pos = 0;
+            char * out = reinterpret_cast<char *>(&out_vec[0]);
+ for (size_t i = 0; i < size; ++i)
+ {
+                const UInt8 * in_pos = reinterpret_cast<const UInt8 *>(&in_vec[i]);
+ executeOneString(in_pos, in_pos + sizeof(T), out);
+
+ pos += FLOAT_HEX_LENGTH;
+ out_offsets[i] = pos;
+ }
+
+ col_res = std::move(col_str);
+ return true;
+ }
+ else
+ {
+ return false;
+ }
+ }
+
void executeOneString(const UInt8 * pos, const UInt8 * end, char *& out)
{
while (pos < end)
@@ -1135,7 +1175,9 @@ public:
            tryExecuteUInt<UInt32>(column, res_column) ||
            tryExecuteUInt<UInt64>(column, res_column) ||
            tryExecuteString(column, res_column) ||
-            tryExecuteFixedString(column, res_column))
+            tryExecuteFixedString(column, res_column) ||
+            tryExecuteFloat<Float32>(column, res_column) ||
+            tryExecuteFloat<Float64>(column, res_column))
return;
throw Exception("Illegal column " + block.getByPosition(arguments[0]).column->getName()
diff --git a/dbms/src/Functions/IFunction.cpp b/dbms/src/Functions/IFunction.cpp
index a86ea724f7a..9a3633a9790 100644
--- a/dbms/src/Functions/IFunction.cpp
+++ b/dbms/src/Functions/IFunction.cpp
@@ -337,19 +337,43 @@ static ColumnPtr replaceLowCardinalityColumnsByNestedAndGetDictionaryIndexes(
size_t num_rows = input_rows_count;
ColumnPtr indexes;
+    /// Find the first LowCardinality column and replace it with its nested dictionary.
for (auto arg : args)
{
ColumnWithTypeAndName & column = block.getByPosition(arg);
        if (auto * low_cardinality_column = checkAndGetColumn<ColumnLowCardinality>(column.column.get()))
{
+            /// Only a single LowCardinality column is supported for now.
if (indexes)
throw Exception("Expected single dictionary argument for function.", ErrorCodes::LOGICAL_ERROR);
- indexes = low_cardinality_column->getIndexesPtr();
- num_rows = low_cardinality_column->getDictionary().size();
+            auto * low_cardinality_type = checkAndGetDataType<DataTypeLowCardinality>(column.type.get());
+
+ if (!low_cardinality_type)
+ throw Exception("Incompatible type for low cardinality column: " + column.type->getName(),
+ ErrorCodes::LOGICAL_ERROR);
+
+ if (can_be_executed_on_default_arguments)
+ {
+                /// Normal case: the function can safely be executed on the dictionary's default value.
+ column.column = low_cardinality_column->getDictionary().getNestedColumn();
+ indexes = low_cardinality_column->getIndexesPtr();
+ }
+ else
+ {
+ /// Special case when default value can't be used. Example: 1 % LowCardinality(Int).
+                /// LowCardinality always contains the default value, so 1 % 0 would throw an exception in the normal case.
+ auto dict_encoded = low_cardinality_column->getMinimalDictionaryEncodedColumn(0, low_cardinality_column->size());
+ column.column = dict_encoded.dictionary;
+ indexes = dict_encoded.indexes;
+ }
+
+ num_rows = column.column->size();
+ column.type = low_cardinality_type->getDictionaryType();
}
}
+ /// Change size of constants.
for (auto arg : args)
{
ColumnWithTypeAndName & column = block.getByPosition(arg);
@@ -358,26 +382,12 @@ static ColumnPtr replaceLowCardinalityColumnsByNestedAndGetDictionaryIndexes(
        if (auto * column_const = checkAndGetColumn<ColumnConst>(column.column.get()))
        {
            column.column = column_const->removeLowCardinality()->cloneResized(num_rows);
column.type = removeLowCardinality(column.type);
}
-        else if (auto * low_cardinality_column = checkAndGetColumn<ColumnLowCardinality>(column.column.get()))
- {
-            auto * low_cardinality_type = checkAndGetDataType<DataTypeLowCardinality>(column.type.get());
-
- if (!low_cardinality_type)
- throw Exception("Incompatible type for low cardinality column: " + column.type->getName(),
- ErrorCodes::LOGICAL_ERROR);
-
- if (can_be_executed_on_default_arguments)
- column.column = low_cardinality_column->getDictionary().getNestedColumn();
- else
- {
- auto dict_encoded = low_cardinality_column->getMinimalDictionaryEncodedColumn(0, low_cardinality_column->size());
- column.column = dict_encoded.dictionary;
- indexes = dict_encoded.indexes;
- }
- column.type = low_cardinality_type->getDictionaryType();
- }
}
+#ifndef NDEBUG
+ block.checkNumberOfRows(true);
+#endif
+
return indexes;
}
diff --git a/dbms/src/Functions/registerFunctionsString.cpp b/dbms/src/Functions/registerFunctionsString.cpp
index e55e43ba2f9..1f4219b18f2 100644
--- a/dbms/src/Functions/registerFunctionsString.cpp
+++ b/dbms/src/Functions/registerFunctionsString.cpp
@@ -4,6 +4,7 @@ namespace DB
{
class FunctionFactory;
+void registerFunctionRepeat(FunctionFactory &);
void registerFunctionEmpty(FunctionFactory &);
void registerFunctionNotEmpty(FunctionFactory &);
void registerFunctionLength(FunctionFactory &);
@@ -34,6 +35,7 @@ void registerFunctionTryBase64Decode(FunctionFactory &);
void registerFunctionsString(FunctionFactory & factory)
{
+ registerFunctionRepeat(factory);
registerFunctionEmpty(factory);
registerFunctionNotEmpty(factory);
registerFunctionLength(factory);
diff --git a/dbms/src/Functions/repeat.cpp b/dbms/src/Functions/repeat.cpp
new file mode 100644
index 00000000000..112ae452e86
--- /dev/null
+++ b/dbms/src/Functions/repeat.cpp
@@ -0,0 +1,212 @@
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+
+namespace DB
+{
+namespace ErrorCodes
+{
+ extern const int ILLEGAL_COLUMN;
+ extern const int ILLEGAL_TYPE_OF_ARGUMENT;
+ extern const int TOO_LARGE_STRING_SIZE;
+}
+
+struct RepeatImpl
+{
+ /// Safety threshold against DoS.
+ static inline void checkRepeatTime(UInt64 repeat_time)
+ {
+ static constexpr UInt64 max_repeat_times = 1000000;
+ if (repeat_time > max_repeat_times)
+ throw Exception("Too many times to repeat (" + std::to_string(repeat_time) + "), maximum is: " + std::to_string(max_repeat_times),
+ ErrorCodes::TOO_LARGE_STRING_SIZE);
+ }
+
+ static void vectorStrConstRepeat(
+ const ColumnString::Chars & data,
+ const ColumnString::Offsets & offsets,
+ ColumnString::Chars & res_data,
+ ColumnString::Offsets & res_offsets,
+ UInt64 repeat_time)
+ {
+ checkRepeatTime(repeat_time);
+
+ UInt64 data_size = 0;
+ res_offsets.assign(offsets);
+ for (UInt64 i = 0; i < offsets.size(); ++i)
+ {
+ data_size += (offsets[i] - offsets[i - 1] - 1) * repeat_time + 1; /// Note that accessing -1th element is valid for PaddedPODArray.
+ res_offsets[i] = data_size;
+ }
+ res_data.resize(data_size);
+ for (UInt64 i = 0; i < res_offsets.size(); ++i)
+ {
+ process(data.data() + offsets[i - 1], res_data.data() + res_offsets[i - 1], offsets[i] - offsets[i - 1], repeat_time);
+ }
+ }
+
+    template <typename T>
+ static void vectorStrVectorRepeat(
+ const ColumnString::Chars & data,
+ const ColumnString::Offsets & offsets,
+ ColumnString::Chars & res_data,
+ ColumnString::Offsets & res_offsets,
+        const PaddedPODArray<T> & col_num)
+ {
+ UInt64 data_size = 0;
+ res_offsets.assign(offsets);
+ for (UInt64 i = 0; i < col_num.size(); ++i)
+ {
+ data_size += (offsets[i] - offsets[i - 1] - 1) * col_num[i] + 1;
+ res_offsets[i] = data_size;
+ }
+ res_data.resize(data_size);
+
+ for (UInt64 i = 0; i < col_num.size(); ++i)
+ {
+ T repeat_time = col_num[i];
+ checkRepeatTime(repeat_time);
+ process(data.data() + offsets[i - 1], res_data.data() + res_offsets[i - 1], offsets[i] - offsets[i - 1], repeat_time);
+ }
+ }
+
+    template <typename T>
+ static void constStrVectorRepeat(
+ const StringRef & copy_str,
+ ColumnString::Chars & res_data,
+ ColumnString::Offsets & res_offsets,
+        const PaddedPODArray<T> & col_num)
+ {
+ UInt64 data_size = 0;
+ res_offsets.resize(col_num.size());
+ UInt64 str_size = copy_str.size;
+ UInt64 col_size = col_num.size();
+ for (UInt64 i = 0; i < col_size; ++i)
+ {
+ data_size += str_size * col_num[i] + 1;
+ res_offsets[i] = data_size;
+ }
+ res_data.resize(data_size);
+ for (UInt64 i = 0; i < col_size; ++i)
+ {
+ T repeat_time = col_num[i];
+ checkRepeatTime(repeat_time);
+ process(
+                reinterpret_cast<UInt8 *>(const_cast<char *>(copy_str.data)),
+ res_data.data() + res_offsets[i - 1],
+ str_size + 1,
+ repeat_time);
+ }
+ }
+
+private:
+ static void process(const UInt8 * src, UInt8 * dst, UInt64 size, UInt64 repeat_time)
+ {
+ for (UInt64 i = 0; i < repeat_time; ++i)
+ {
+ memcpy(dst, src, size - 1);
+ dst += size - 1;
+ }
+ *dst = 0;
+ }
+};
+
+
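+/// Implements the `repeat(s, n)` SQL function: the string `s` repeated `n` times.
+/// Illustrative usage (an example added for this write-up, not part of the patch):
+///     SELECT repeat('ab', 3);  -- 'ababab'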
+class FunctionRepeat : public IFunction
+{
+    template <typename F>
+    static bool castType(const IDataType * type, F && f)
+    {
+        return castTypeToEither<DataTypeUInt8, DataTypeUInt16, DataTypeUInt32, DataTypeUInt64>(type, std::forward<F>(f));
+ }
+
+public:
+ static constexpr auto name = "repeat";
+    static FunctionPtr create(const Context &) { return std::make_shared<FunctionRepeat>(); }
+
+ String getName() const override { return name; }
+
+ size_t getNumberOfArguments() const override { return 2; }
+
+ DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
+ {
+ if (!isString(arguments[0]))
+ throw Exception(
+ "Illegal type " + arguments[0]->getName() + " of argument of function " + getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+ if (!isUnsignedInteger(arguments[1]))
+ throw Exception(
+ "Illegal type " + arguments[1]->getName() + " of argument of function " + getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
+ return arguments[0];
+ }
+
+ bool useDefaultImplementationForConstants() const override { return true; }
+
+ void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t) override
+ {
+ const auto & strcolumn = block.getByPosition(arguments[0]).column;
+ const auto & numcolumn = block.getByPosition(arguments[1]).column;
+
+        if (const ColumnString * col = checkAndGetColumn<ColumnString>(strcolumn.get()))
+ {
+            if (const ColumnConst * scale_column_num = checkAndGetColumn<ColumnConst>(numcolumn.get()))
+ {
+                UInt64 repeat_time = scale_column_num->getValue<UInt64>();
+ auto col_res = ColumnString::create();
+ RepeatImpl::vectorStrConstRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), repeat_time);
+ block.getByPosition(result).column = std::move(col_res);
+ return;
+ }
+ else if (castType(block.getByPosition(arguments[1]).type.get(), [&](const auto & type)
+ {
+                using DataType = std::decay_t<decltype(type)>;
+ using T = typename DataType::FieldType;
+                const ColumnVector<T> * colnum = checkAndGetColumn<ColumnVector<T>>(numcolumn.get());
+ auto col_res = ColumnString::create();
+ RepeatImpl::vectorStrVectorRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), colnum->getData());
+ block.getByPosition(result).column = std::move(col_res);
+ return true;
+ }))
+ {
+ return;
+ }
+ }
+        else if (const ColumnConst * col_const = checkAndGetColumn<ColumnConst>(strcolumn.get()))
+ {
+ /// Note that const-const case is handled by useDefaultImplementationForConstants.
+
+ StringRef copy_str = col_const->getDataColumn().getDataAt(0);
+
+ if (castType(block.getByPosition(arguments[1]).type.get(), [&](const auto & type)
+ {
+                using DataType = std::decay_t<decltype(type)>;
+ using T = typename DataType::FieldType;
+                const ColumnVector<T> * colnum = checkAndGetColumn<ColumnVector<T>>(numcolumn.get());
+ auto col_res = ColumnString::create();
+ RepeatImpl::constStrVectorRepeat(copy_str, col_res->getChars(), col_res->getOffsets(), colnum->getData());
+ block.getByPosition(result).column = std::move(col_res);
+ return true;
+ }))
+ {
+ return;
+ }
+ }
+
+ throw Exception(
+ "Illegal column " + block.getByPosition(arguments[0]).column->getName() + " of argument of function " + getName(),
+ ErrorCodes::ILLEGAL_COLUMN);
+ }
+};
+
+
+void registerFunctionRepeat(FunctionFactory & factory)
+{
+    factory.registerFunction<FunctionRepeat>();
+}
+
+}
diff --git a/dbms/src/IO/HDFSCommon.cpp b/dbms/src/IO/HDFSCommon.cpp
index 1c9980105f0..2bd420131b0 100644
--- a/dbms/src/IO/HDFSCommon.cpp
+++ b/dbms/src/IO/HDFSCommon.cpp
@@ -17,8 +17,8 @@ HDFSBuilderPtr createHDFSBuilder(const std::string & uri_str)
const Poco::URI uri(uri_str);
auto & host = uri.getHost();
auto port = uri.getPort();
- auto & path = uri.getPath();
- if (host.empty() || path.empty())
+ const std::string path = "//";
+ if (host.empty())
throw Exception("Illegal HDFS URI: " + uri.toString(), ErrorCodes::BAD_ARGUMENTS);
HDFSBuilderPtr builder(hdfsNewBuilder());
diff --git a/dbms/src/IO/HTTPCommon.cpp b/dbms/src/IO/HTTPCommon.cpp
index bd481145eee..de094cd52cd 100644
--- a/dbms/src/IO/HTTPCommon.cpp
+++ b/dbms/src/IO/HTTPCommon.cpp
@@ -40,6 +40,7 @@ namespace ErrorCodes
extern const int RECEIVED_ERROR_TOO_MANY_REQUESTS;
extern const int FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME;
extern const int UNSUPPORTED_URI_SCHEME;
+ extern const int TOO_MANY_REDIRECTS;
}
@@ -216,20 +217,21 @@ PooledHTTPSessionPtr makePooledHTTPSession(const Poco::URI & uri, const Connecti
return HTTPSessionPool::instance().getSession(uri, timeouts, per_endpoint_pool_size);
}
+bool isRedirect(const Poco::Net::HTTPResponse::HTTPStatus status)
+{
+    return status == Poco::Net::HTTPResponse::HTTP_MOVED_PERMANENTLY
+        || status == Poco::Net::HTTPResponse::HTTP_FOUND
+        || status == Poco::Net::HTTPResponse::HTTP_SEE_OTHER
+        || status == Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT;
+}
+
std::istream * receiveResponse(
- Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response)
+ Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, const bool allow_redirects)
{
auto & istr = session.receiveResponse(response);
- assertResponseIsOk(request, response, istr);
+ assertResponseIsOk(request, response, istr, allow_redirects);
return &istr;
}
-void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr)
+void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr, const bool allow_redirects)
{
auto status = response.getStatus();
- if (status != Poco::Net::HTTPResponse::HTTP_OK)
+ if (!(status == Poco::Net::HTTPResponse::HTTP_OK || (isRedirect(status) && allow_redirects)))
{
std::stringstream error_message;
error_message << "Received error from remote server " << request.getURI() << ". HTTP status code: " << status << " "
diff --git a/dbms/src/IO/HTTPCommon.h b/dbms/src/IO/HTTPCommon.h
index 412429e59d1..7592c1c31b3 100644
--- a/dbms/src/IO/HTTPCommon.h
+++ b/dbms/src/IO/HTTPCommon.h
@@ -9,7 +9,7 @@
#include
#include
#include
-
+#include <Poco/Net/HTTPResponse.h>
#include
@@ -50,13 +50,14 @@ HTTPSessionPtr makeHTTPSession(const Poco::URI & uri, const ConnectionTimeouts &
/// Like the previous method, creates a session, but takes it from the pool.
PooledHTTPSessionPtr makePooledHTTPSession(const Poco::URI & uri, const ConnectionTimeouts & timeouts, size_t per_endpoint_pool_size);
+bool isRedirect(const Poco::Net::HTTPResponse::HTTPStatus status);
+
/** Used to receive response (response headers and possibly body)
* after sending data (request headers and possibly body).
* Throws exception in case of non HTTP_OK (200) response code.
* Returned istream lives in 'session' object.
*/
std::istream * receiveResponse(
- Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response);
-void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr);
-
+ Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, bool allow_redirects);
+void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr, const bool allow_redirects = false);
}
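
The new check in `assertResponseIsOk` accepts a response when it is `HTTP_OK`, or a redirect status while the caller passed `allow_redirects`. A minimal standalone sketch of that predicate, using the real Poco status constants (`isRedirectStatus` and `responseIsAcceptable` are illustrative names, not the functions above):

```cpp
#include <iostream>
#include <Poco/Net/HTTPResponse.h>

using Status = Poco::Net::HTTPResponse::HTTPStatus;

// Mirrors the set of statuses treated as redirects in HTTPCommon.cpp.
static bool isRedirectStatus(Status status)
{
    return status == Poco::Net::HTTPResponse::HTTP_MOVED_PERMANENTLY
        || status == Poco::Net::HTTPResponse::HTTP_FOUND
        || status == Poco::Net::HTTPResponse::HTTP_SEE_OTHER
        || status == Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT;
}

// Mirrors the condition inside assertResponseIsOk().
static bool responseIsAcceptable(Status status, bool allow_redirects)
{
    return status == Poco::Net::HTTPResponse::HTTP_OK
        || (isRedirectStatus(status) && allow_redirects);
}

int main()
{
    std::cout << responseIsAcceptable(Poco::Net::HTTPResponse::HTTP_FOUND, true) << '\n';  // 1
    std::cout << responseIsAcceptable(Poco::Net::HTTPResponse::HTTP_FOUND, false) << '\n'; // 0
}
```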
diff --git a/dbms/src/IO/ReadWriteBufferFromHTTP.h b/dbms/src/IO/ReadWriteBufferFromHTTP.h
index 85aced718a1..d2140e14792 100644
--- a/dbms/src/IO/ReadWriteBufferFromHTTP.h
+++ b/dbms/src/IO/ReadWriteBufferFromHTTP.h
@@ -16,6 +16,7 @@
#include
#include
#include
+#include <Core/SettingsCommon.h>
#define DEFAULT_HTTP_READ_BUFFER_TIMEOUT 1800
@@ -26,41 +27,84 @@ namespace DB
/** Perform HTTP POST request and provide response to read.
*/
+namespace ErrorCodes
+{
+ extern const int TOO_MANY_REDIRECTS;
+}
+
+template <typename SessionPtr>
+class UpdatableSessionBase
+{
+protected:
+ SessionPtr session;
+ UInt64 redirects { 0 };
+ Poco::URI initial_uri;
+ const ConnectionTimeouts & timeouts;
+ DB::SettingUInt64 max_redirects;
+
+public:
+ virtual void buildNewSession(const Poco::URI & uri) = 0;
+
+ explicit UpdatableSessionBase(const Poco::URI uri,
+ const ConnectionTimeouts & timeouts_,
+ SettingUInt64 max_redirects_)
+ : initial_uri { uri }
+ , timeouts { timeouts_ }
+ , max_redirects { max_redirects_ }
+ {
+ }
+
+ SessionPtr getSession()
+ {
+ return session;
+ }
+
+ void updateSession(const Poco::URI & uri)
+ {
+ ++redirects;
+ if (redirects <= max_redirects)
+ {
+ buildNewSession(uri);
+ }
+ else
+ {
+ std::stringstream error_message;
+ error_message << "Too many redirects while trying to access " << initial_uri.toString();
+
+ throw Exception(error_message.str(), ErrorCodes::TOO_MANY_REDIRECTS);
+ }
+ }
+
+ virtual ~UpdatableSessionBase()
+ {
+ }
+};
+
+
namespace detail
{
- template <typename SessionPtr>
+ template <typename UpdatableSessionPtr>
class ReadWriteBufferFromHTTPBase : public ReadBuffer
{
protected:
Poco::URI uri;
std::string method;
- SessionPtr session;
+ UpdatableSessionPtr session;
std::istream * istr; /// owned by session
std::unique_ptr<ReadBufferFromIStream> impl;
+ std::function<void(std::ostream &)> out_stream_callback;
+ const Poco::Net::HTTPBasicCredentials & credentials;
std::vector<Poco::Net::HTTPCookie> cookies;
- public:
- using OutStreamCallback = std::function<void(std::ostream &)>;
-
- explicit ReadWriteBufferFromHTTPBase(
- SessionPtr session_,
- Poco::URI uri_,
- const std::string & method_ = {},
- OutStreamCallback out_stream_callback = {},
- const Poco::Net::HTTPBasicCredentials & credentials = {},
- size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE)
- : ReadBuffer(nullptr, 0)
- , uri{uri_}
- , method{!method_.empty() ? method_ : out_stream_callback ? Poco::Net::HTTPRequest::HTTP_POST : Poco::Net::HTTPRequest::HTTP_GET}
- , session{session_}
+ std::istream * call(const Poco::URI uri_, Poco::Net::HTTPResponse & response)
{
// With an empty path Poco will send "POST HTTP/1.1"; this is a Poco bug.
if (uri.getPath().empty())
uri.setPath("/");
- Poco::Net::HTTPRequest request(method, uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
- request.setHost(uri.getHost()); // use original, not resolved host name in header
+ Poco::Net::HTTPRequest request(method, uri_.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
+ request.setHost(uri_.getHost()); // use original, not resolved host name in header
if (out_stream_callback)
request.setChunkedTransferEncoding(true);
@@ -68,27 +112,71 @@ namespace detail
if (!credentials.getUsername().empty())
credentials.authenticate(request);
- Poco::Net::HTTPResponse response;
-
LOG_TRACE((&Logger::get("ReadWriteBufferFromHTTP")), "Sending request to " << uri.toString());
+ auto sess = session->getSession();
+
try
{
- auto & stream_out = session->sendRequest(request);
+ auto & stream_out = sess->sendRequest(request);
if (out_stream_callback)
out_stream_callback(stream_out);
- istr = receiveResponse(*session, request, response);
+ istr = receiveResponse(*sess, request, response, true);
response.getCookies(cookies);
+ return istr;
+
+ }
+ catch (const Poco::Exception & e)
+ {
+ /// We use the session data storage as storage for the exception text.
+ /// Depending on it, we can decide whether to reconnect the session or to re-resolve the session host.
+ sess->attachSessionData(e.message());
+ throw;
+ }
+ }
+
+ public:
+ using OutStreamCallback = std::function<void(std::ostream &)>;
+
+ explicit ReadWriteBufferFromHTTPBase(UpdatableSessionPtr session_,
+ Poco::URI uri_,
+ const std::string & method_ = {},
+ OutStreamCallback out_stream_callback_ = {},
+ const Poco::Net::HTTPBasicCredentials & credentials_ = {},
+ size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE)
+ : ReadBuffer(nullptr, 0)
+ , uri {uri_}
+ , method {!method_.empty() ? method_ : out_stream_callback_ ? Poco::Net::HTTPRequest::HTTP_POST : Poco::Net::HTTPRequest::HTTP_GET}
+ , session {session_}
+ , out_stream_callback {out_stream_callback_}
+ , credentials {credentials_}
+ {
+ Poco::Net::HTTPResponse response;
+
+ istr = call(uri, response);
+
+ while (isRedirect(response.getStatus()))
+ {
+ Poco::URI uri_redirect(response.get("Location"));
+
+ session->updateSession(uri_redirect);
+
+ istr = call(uri_redirect, response);
+ }
+
+ try
+ {
impl = std::make_unique<ReadBufferFromIStream>(*istr, buffer_size_);
}
catch (const Poco::Exception & e)
{
/// We use the session data storage as storage for the exception text.
/// Depending on it, we can decide whether to reconnect the session or to re-resolve the session host.
- session->attachSessionData(e.message());
+ auto sess = session->getSession();
+ sess->attachSessionData(e.message());
throw;
}
}
@@ -112,42 +200,88 @@ namespace detail
};
}
-class ReadWriteBufferFromHTTP : public detail::ReadWriteBufferFromHTTPBase<HTTPSessionPtr>
+class UpdatableSession : public UpdatableSessionBase<HTTPSessionPtr>
{
- using Parent = detail::ReadWriteBufferFromHTTPBase<HTTPSessionPtr>;
+ using Parent = UpdatableSessionBase<HTTPSessionPtr>;
+
+public:
+ explicit UpdatableSession(const Poco::URI uri,
+ const ConnectionTimeouts & timeouts_,
+ const SettingUInt64 max_redirects_)
+ : Parent(uri, timeouts_, max_redirects_)
+ {
+ session = makeHTTPSession(initial_uri, timeouts);
+ }
+
+ void buildNewSession(const Poco::URI & uri) override
+ {
+ session = makeHTTPSession(uri, timeouts);
+ }
+};
+
+class ReadWriteBufferFromHTTP : public detail::ReadWriteBufferFromHTTPBase<std::shared_ptr<UpdatableSession>>
+{
+ using Parent = detail::ReadWriteBufferFromHTTPBase<std::shared_ptr<UpdatableSession>>;
public:
explicit ReadWriteBufferFromHTTP(Poco::URI uri_,
const std::string & method_ = {},
- OutStreamCallback out_stream_callback = {},
+ OutStreamCallback out_stream_callback_ = {},
const ConnectionTimeouts & timeouts = {},
- const Poco::Net::HTTPBasicCredentials & credentials = {},
+ const DB::SettingUInt64 max_redirects = 0,
+ const Poco::Net::HTTPBasicCredentials & credentials_ = {},
size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE)
- : Parent(makeHTTPSession(uri_, timeouts), uri_, method_, out_stream_callback, credentials, buffer_size_)
+ : Parent(std::make_shared<UpdatableSession>(uri_, timeouts, max_redirects), uri_, method_, out_stream_callback_, credentials_, buffer_size_)
{
}
};
-class PooledReadWriteBufferFromHTTP : public detail::ReadWriteBufferFromHTTPBase<PooledHTTPSessionPtr>
+
+class UpdatablePooledSession : public UpdatableSessionBase<PooledHTTPSessionPtr>
{
- using Parent = detail::ReadWriteBufferFromHTTPBase<PooledHTTPSessionPtr>;
+ using Parent = UpdatableSessionBase<PooledHTTPSessionPtr>;
+
+private:
+ size_t per_endpoint_pool_size;
+
+public:
+ explicit UpdatablePooledSession(const Poco::URI uri,
+ const ConnectionTimeouts & timeouts_,
+ const SettingUInt64 max_redirects_,
+ size_t per_endpoint_pool_size_)
+ : Parent(uri, timeouts_, max_redirects_)
+ , per_endpoint_pool_size { per_endpoint_pool_size_ }
+ {
+ session = makePooledHTTPSession(initial_uri, timeouts, per_endpoint_pool_size);
+ }
+
+ void buildNewSession(const Poco::URI & uri) override
+ {
+ session = makePooledHTTPSession(uri, timeouts, per_endpoint_pool_size);
+ }
+};
+
+class PooledReadWriteBufferFromHTTP : public detail::ReadWriteBufferFromHTTPBase<std::shared_ptr<UpdatablePooledSession>>
+{
+ using Parent = detail::ReadWriteBufferFromHTTPBase<std::shared_ptr<UpdatablePooledSession>>;
public:
explicit PooledReadWriteBufferFromHTTP(Poco::URI uri_,
const std::string & method_ = {},
- OutStreamCallback out_stream_callback = {},
- const ConnectionTimeouts & timeouts = {},
- const Poco::Net::HTTPBasicCredentials & credentials = {},
+ OutStreamCallback out_stream_callback_ = {},
+ const ConnectionTimeouts & timeouts_ = {},
+ const Poco::Net::HTTPBasicCredentials & credentials_ = {},
size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE,
+ const DB::SettingUInt64 max_redirects = 0,
size_t max_connections_per_endpoint = DEFAULT_COUNT_OF_HTTP_CONNECTIONS_PER_ENDPOINT)
- : Parent(makePooledHTTPSession(uri_, timeouts, max_connections_per_endpoint),
+ : Parent(std::make_shared<UpdatablePooledSession>(uri_, timeouts_, max_redirects, max_connections_per_endpoint),
uri_,
method_,
- out_stream_callback,
- credentials,
+ out_stream_callback_,
+ credentials_,
buffer_size_)
{
}
};
-
}
+
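
Taken together, `UpdatableSessionBase::updateSession` and the constructor of `ReadWriteBufferFromHTTPBase` implement a bounded redirect-following loop: issue the request, and while the response is a redirect, rebuild the session for the `Location` target and retry, failing once `max_redirects` is exceeded. A hedged sketch of that control flow, with simplified stand-in types rather than the real classes:

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Stand-in for one HTTP round trip; a real implementation goes over the network.
struct Response
{
    bool is_redirect = false;
    std::string location;
};

static Response call(const std::string & uri)
{
    return uri == "http://a" ? Response{true, "http://b"} : Response{};
}

static Response callFollowingRedirects(const std::string & uri, size_t max_redirects)
{
    Response response = call(uri);
    size_t redirects = 0;
    while (response.is_redirect)
    {
        if (++redirects > max_redirects)
            throw std::runtime_error("Too many redirects while trying to access " + uri);
        // The real code rebuilds the HTTP session for the new endpoint here
        // (UpdatableSessionBase::updateSession), then re-issues the request.
        response = call(response.location);
    }
    return response;
}

int main()
{
    callFollowingRedirects("http://a", 1);  // ok: one redirect is within the limit
}
```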
diff --git a/dbms/src/IO/WriteBufferFromHDFS.cpp b/dbms/src/IO/WriteBufferFromHDFS.cpp
index 2cd83ee6479..9733d761ee4 100644
--- a/dbms/src/IO/WriteBufferFromHDFS.cpp
+++ b/dbms/src/IO/WriteBufferFromHDFS.cpp
@@ -15,6 +15,7 @@ namespace ErrorCodes
extern const int NETWORK_ERROR;
extern const int CANNOT_OPEN_FILE;
extern const int CANNOT_FSYNC;
+extern const int BAD_ARGUMENTS;
}
@@ -32,10 +33,12 @@ struct WriteBufferFromHDFS::WriteBufferFromHDFSImpl
{
const size_t begin_of_path = hdfs_uri.find('/', hdfs_uri.find("//") + 2);
const std::string path = hdfs_uri.substr(begin_of_path);
- if (path.find("*?{") != std::string::npos)
+ if (path.find_first_of("*?{") != std::string::npos)
throw Exception("URI '" + hdfs_uri + "' contains globs, so the table is in readonly mode", ErrorCodes::CANNOT_OPEN_FILE);
- fout = hdfsOpenFile(fs.get(), path.c_str(), O_WRONLY, 0, 0, 0);
+ if (!hdfsExists(fs.get(), path.c_str()))
+ throw Exception("File: " + path + " is already exists", ErrorCodes::BAD_ARGUMENTS);
+ fout = hdfsOpenFile(fs.get(), path.c_str(), O_WRONLY, 0, 0, 0); /// O_WRONLY meaning create or overwrite i.e., implies O_TRUNCAT here
if (fout == nullptr)
{
diff --git a/dbms/src/IO/WriteBufferFromHTTP.cpp b/dbms/src/IO/WriteBufferFromHTTP.cpp
index c74c74a0bd0..0a8095b960f 100644
--- a/dbms/src/IO/WriteBufferFromHTTP.cpp
+++ b/dbms/src/IO/WriteBufferFromHTTP.cpp
@@ -22,7 +22,7 @@ WriteBufferFromHTTP::WriteBufferFromHTTP(
void WriteBufferFromHTTP::finalize()
{
- receiveResponse(*session, request, response);
+ receiveResponse(*session, request, response, false);
/// TODO: Response body is ignored.
}
diff --git a/dbms/src/IO/WriteBufferFromS3.cpp b/dbms/src/IO/WriteBufferFromS3.cpp
index 1ef6f3b19a0..9604b6ce199 100644
--- a/dbms/src/IO/WriteBufferFromS3.cpp
+++ b/dbms/src/IO/WriteBufferFromS3.cpp
@@ -15,6 +15,10 @@ namespace DB
{
const int DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT = 2;
+
+// The S3 protocol does not allow a multipart upload to have more than 10000 parts.
+// In case the server does not return an error on exceeding that number, we print a warning
+// because a custom S3 implementation may have relaxed requirements on that.
const int S3_WARN_MAX_PARTS = 10000;
@@ -166,7 +170,7 @@ void WriteBufferFromS3::writePart(const String & data)
if (part_tags.size() == S3_WARN_MAX_PARTS)
{
// Don't throw exception here by ourselves but leave the decision to take by S3 server.
- LOG_WARNING(&Logger::get("WriteBufferFromS3"), "Maximum part number in S3 protocol has reached (too much parts). Server may not accept this whole upload.");
+ LOG_WARNING(&Logger::get("WriteBufferFromS3"), "Maximum part number in S3 protocol has been reached (too many parts). Server may not accept this whole upload.");
}
for (int i = 0; i < DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT; ++i)
diff --git a/dbms/src/IO/WriteBufferFromS3.h b/dbms/src/IO/WriteBufferFromS3.h
index 0eb689e468f..9afda1d14e2 100644
--- a/dbms/src/IO/WriteBufferFromS3.h
+++ b/dbms/src/IO/WriteBufferFromS3.h
@@ -1,25 +1,16 @@
#pragma once
-#include
#include
#include
#include
#include
#include
#include
-#include
-#include
#include
#include
#include
-#include
#include
-#include
#include
-#include
-#include
-#include
-#include
namespace DB
@@ -36,6 +27,9 @@ private:
String buffer_string;
std::unique_ptr<WriteBufferFromString> temporary_buffer;
size_t last_part_size;
+
+ /// Upload in S3 is made in parts.
+ /// We initiate an upload, then upload each part and get an ETag as a response, and then finish the upload by listing all our parts.
String upload_id;
std::vector<String> part_tags;
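
The comments above describe the multipart flow: initiate an upload to obtain `upload_id`, upload parts while collecting one ETag per part, and complete the upload by listing all collected tags, warning once the 10000-part protocol limit is hit. A sketch of that bookkeeping under assumed shapes (not the real `WriteBufferFromS3` API):

```cpp
#include <iostream>
#include <string>
#include <vector>

constexpr size_t S3_WARN_MAX_PARTS = 10000;

// Assumed-shape sketch of the multipart state described above.
struct MultipartUploadState
{
    std::string upload_id;               // returned by the initiate request
    std::vector<std::string> part_tags;  // one ETag per uploaded part

    void onPartUploaded(std::string etag)
    {
        part_tags.push_back(std::move(etag));
        if (part_tags.size() == S3_WARN_MAX_PARTS)
            std::cerr << "Maximum part number in S3 protocol has been reached (too many parts)."
                         " Server may not accept this whole upload.\n";
    }
    // Completing the upload sends upload_id plus the full list of part_tags.
};

int main()
{
    MultipartUploadState state{"example-upload-id", {}};
    state.onPartUploaded("\"etag-1\"");
    std::cout << state.part_tags.size() << '\n';  // 1
}
```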
diff --git a/dbms/src/IO/WriteHelpers.cpp b/dbms/src/IO/WriteHelpers.cpp
index 5dc2358c4c0..0b5bce27b46 100644
--- a/dbms/src/IO/WriteHelpers.cpp
+++ b/dbms/src/IO/WriteHelpers.cpp
@@ -67,4 +67,25 @@ void writeException(const Exception & e, WriteBuffer & buf, bool with_stack_trac
writeException(Exception(Exception::CreateFromPoco, *e.nested()), buf, with_stack_trace);
}
+
+String backQuoteIfNeed(const String & x)
+{
+ String res(x.size(), '\0');
+ {
+ WriteBufferFromString wb(res);
+ writeProbablyBackQuotedString(x, wb);
+ }
+ return res;
+}
+
+String backQuote(const String & x)
+{
+ String res(x.size(), '\0');
+ {
+ WriteBufferFromString wb(res);
+ writeBackQuotedString(x, wb);
+ }
+ return res;
+}
+
}
diff --git a/dbms/src/IO/WriteHelpers.h b/dbms/src/IO/WriteHelpers.h
index 44d7b7ab540..ab3fad08860 100644
--- a/dbms/src/IO/WriteHelpers.h
+++ b/dbms/src/IO/WriteHelpers.h
@@ -906,4 +906,10 @@ inline String toString(const T & x)
return buf.str();
}
+
+/// Quote the identifier with backquotes, if required.
+String backQuoteIfNeed(const String & x);
+/// Quote the identifier with backquotes.
+String backQuote(const String & x);
+
}
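
Per the declarations above, `backQuote` always wraps the identifier while `backQuoteIfNeed` wraps only when the identifier is not a plain word. A hedged sketch of the "if needed" rule (`backQuoteIfNeedSketch` is illustrative; the real logic, including escaping of embedded backquotes, lives in `writeProbablyBackQuotedString`):

```cpp
#include <cctype>
#include <iostream>
#include <string>

// Sketch of the quoting rule implied by the declarations above: identifiers
// that are not plain words (letters, digits, underscores, not starting with
// a digit) get wrapped in backquotes.
static std::string backQuoteIfNeedSketch(const std::string & x)
{
    bool plain_word = !x.empty() && (std::isalpha(static_cast<unsigned char>(x[0])) || x[0] == '_');
    for (char c : x)
        if (!(std::isalnum(static_cast<unsigned char>(c)) || c == '_'))
            plain_word = false;
    return plain_word ? x : "`" + x + "`";
}

int main()
{
    std::cout << backQuoteIfNeedSketch("hits") << '\n';      // hits
    std::cout << backQuoteIfNeedSketch("hit rate") << '\n';  // `hit rate`
}
```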
diff --git a/dbms/src/Interpreters/ClusterProxy/executeQuery.cpp b/dbms/src/Interpreters/ClusterProxy/executeQuery.cpp
index 989595b3647..9a0494cca45 100644
--- a/dbms/src/Interpreters/ClusterProxy/executeQuery.cpp
+++ b/dbms/src/Interpreters/ClusterProxy/executeQuery.cpp
@@ -18,8 +18,6 @@ Context removeUserRestrictionsFromSettings(const Context & context, const Settin
{
Settings new_settings = settings;
new_settings.queue_max_wait_ms = Cluster::saturate(new_settings.queue_max_wait_ms, settings.max_execution_time);
- new_settings.connection_pool_max_wait_ms = Cluster::saturate(new_settings.connection_pool_max_wait_ms, settings.max_execution_time);
- new_settings.replace_running_query_max_wait_ms = Cluster::saturate(new_settings.replace_running_query_max_wait_ms, settings.max_execution_time);
/// Does not matter on remote servers, because queries are sent under different user.
new_settings.max_concurrent_queries_for_user = 0;
@@ -39,8 +37,8 @@ Context removeUserRestrictionsFromSettings(const Context & context, const Settin
}
BlockInputStreams executeQuery(
- IStreamFactory & stream_factory, const ClusterPtr & cluster,
- const ASTPtr & query_ast, const Context & context, const Settings & settings)
+ IStreamFactory & stream_factory, const ClusterPtr & cluster,
+ const ASTPtr & query_ast, const Context & context, const Settings & settings)
{
BlockInputStreams res;
diff --git a/dbms/src/Interpreters/IJoin.h b/dbms/src/Interpreters/IJoin.h
index 9773f7ae979..af6d643cc2b 100644
--- a/dbms/src/Interpreters/IJoin.h
+++ b/dbms/src/Interpreters/IJoin.h
@@ -25,7 +25,7 @@ public:
/// Could be called from different threads in parallel.
virtual void joinBlock(Block & block) = 0;
- virtual bool hasTotals() const { return false; }
+ virtual bool hasTotals() const = 0;
virtual void setTotals(const Block & block) = 0;
virtual void joinTotals(Block & block) const = 0;
diff --git a/dbms/src/Interpreters/MergeJoin.h b/dbms/src/Interpreters/MergeJoin.h
index 72a431c8d59..89165c70890 100644
--- a/dbms/src/Interpreters/MergeJoin.h
+++ b/dbms/src/Interpreters/MergeJoin.h
@@ -24,6 +24,7 @@ public:
void joinBlock(Block &) override;
void joinTotals(Block &) const override;
void setTotals(const Block &) override;
+ bool hasTotals() const override { return totals; }
size_t getTotalRowCount() const override { return right_blocks_row_count; }
private:
diff --git a/dbms/src/Interpreters/PredicateExpressionsOptimizer.h b/dbms/src/Interpreters/PredicateExpressionsOptimizer.h
index 302c2e62a65..ca2c8b8766d 100644
--- a/dbms/src/Interpreters/PredicateExpressionsOptimizer.h
+++ b/dbms/src/Interpreters/PredicateExpressionsOptimizer.h
@@ -20,7 +20,7 @@ class Context;
* - Query after optimization :
* SELECT id_1, name_1 FROM (SELECT id_1, name_1 FROM table_a WHERE id_1 = 1 UNION ALL SELECT id_2, name_2 FROM table_b WHERE id_2 = 1)
* WHERE id_1 = 1
- * For more details : https://github.com/yandex/ClickHouse/pull/2015#issuecomment-374283452
+ * For more details : https://github.com/ClickHouse/ClickHouse/pull/2015#issuecomment-374283452
*/
class PredicateExpressionsOptimizer
{
diff --git a/dbms/src/Interpreters/RowRefs.h b/dbms/src/Interpreters/RowRefs.h
index ffcaf370b00..03309831322 100644
--- a/dbms/src/Interpreters/RowRefs.h
+++ b/dbms/src/Interpreters/RowRefs.h
@@ -242,7 +242,7 @@ private:
// Lookups can be stored in a HashTable because it is memmovable
// A std::variant contains a currently active type id (memmovable), together with a union of the types
// The types are all std::unique_ptr, which contains a single pointer, which is memmovable.
- // Source: https://github.com/yandex/ClickHouse/issues/4906
+ // Source: https://github.com/ClickHouse/ClickHouse/issues/4906
Lookups lookups;
};
diff --git a/dbms/src/Interpreters/Set.cpp b/dbms/src/Interpreters/Set.cpp
index 4313decd36d..68c219c3a91 100644
--- a/dbms/src/Interpreters/Set.cpp
+++ b/dbms/src/Interpreters/Set.cpp
@@ -470,7 +470,7 @@ MergeTreeSetIndex::MergeTreeSetIndex(const Columns & set_elements, std::vector<KeyTuplePositionMapping> && indexes_mapping_)
-BoolMask MergeTreeSetIndex::checkInRange(const std::vector<Range> & key_ranges, const DataTypes & data_types)
+BoolMask MergeTreeSetIndex::mayBeTrueInRange(const std::vector<Range> & key_ranges, const DataTypes & data_types)
{
size_t tuple_size = indexes_mapping.size();
diff --git a/dbms/src/Interpreters/Set.h b/dbms/src/Interpreters/Set.h
index a252f1ebc1e..987252e37ba 100644
--- a/dbms/src/Interpreters/Set.h
+++ b/dbms/src/Interpreters/Set.h
@@ -170,7 +170,7 @@ using Sets = std::vector<SetPtr>;
class IFunction;
using FunctionPtr = std::shared_ptr<IFunction>;
-/// Class for checkInRange function.
+/// Class for mayBeTrueInRange function.
class MergeTreeSetIndex
{
public:
@@ -188,7 +188,7 @@ public:
size_t size() const { return ordered_set.at(0)->size(); }
- BoolMask checkInRange(const std::vector<Range> & key_ranges, const DataTypes & data_types);
+ BoolMask mayBeTrueInRange(const std::vector<Range> & key_ranges, const DataTypes & data_types);
private:
Columns ordered_set;
diff --git a/dbms/src/Parsers/IAST.cpp b/dbms/src/Parsers/IAST.cpp
index 0cc15c51c23..eb9f1462666 100644
--- a/dbms/src/Parsers/IAST.cpp
+++ b/dbms/src/Parsers/IAST.cpp
@@ -26,26 +26,6 @@ const char * IAST::hilite_substitution = "\033[1;36m";
const char * IAST::hilite_none = "\033[0m";
-String backQuoteIfNeed(const String & x)
-{
- String res(x.size(), '\0');
- {
- WriteBufferFromString wb(res);
- writeProbablyBackQuotedString(x, wb);
- }
- return res;
-}
-
-String backQuote(const String & x)
-{
- String res(x.size(), '\0');
- {
- WriteBufferFromString wb(res);
- writeBackQuotedString(x, wb);
- }
- return res;
-}
-
size_t IAST::size() const
{
size_t res = 1;
diff --git a/dbms/src/Parsers/IAST.h b/dbms/src/Parsers/IAST.h
index f30bdef0e2b..c896ed2ce3f 100644
--- a/dbms/src/Parsers/IAST.h
+++ b/dbms/src/Parsers/IAST.h
@@ -5,6 +5,7 @@
#include
#include
#include
+#include <IO/WriteHelpers.h> /// backQuote, backQuoteIfNeed
#include
#include
@@ -223,9 +224,4 @@ private:
};
-/// Quote the identifier with backquotes, if required.
-String backQuoteIfNeed(const String & x);
-/// Quote the identifier with backquotes.
-String backQuote(const String & x);
-
}
diff --git a/dbms/src/Storages/MergeTree/DataPartsExchange.cpp b/dbms/src/Storages/MergeTree/DataPartsExchange.cpp
index 1ab4c92042f..073f33441ac 100644
--- a/dbms/src/Storages/MergeTree/DataPartsExchange.cpp
+++ b/dbms/src/Storages/MergeTree/DataPartsExchange.cpp
@@ -219,6 +219,7 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart(
timeouts,
creds,
DBMS_DEFAULT_BUFFER_SIZE,
+ 0, /* no redirects */
data_settings->replicated_max_parallel_fetches_for_host
};
diff --git a/dbms/src/Storages/MergeTree/KeyCondition.cpp b/dbms/src/Storages/MergeTree/KeyCondition.cpp
index 0ebe8f79aba..b3e4c776605 100644
--- a/dbms/src/Storages/MergeTree/KeyCondition.cpp
+++ b/dbms/src/Storages/MergeTree/KeyCondition.cpp
@@ -886,7 +886,7 @@ String KeyCondition::toString() const
*/
template <typename F>
-static BoolMask forAnyParallelogram(
+static bool forAnyParallelogram(
size_t key_size,
const Field * key_left,
const Field * key_right,
@@ -942,15 +942,16 @@ static BoolMask forAnyParallelogram(
for (size_t i = prefix_size + 1; i < key_size; ++i)
parallelogram[i] = Range();
- BoolMask result(false, false);
- result = result | callback(parallelogram);
+ if (callback(parallelogram))
+ return true;
/// [x1] x [y1 .. +inf)
if (left_bounded)
{
parallelogram[prefix_size] = Range(key_left[prefix_size]);
- result = result | forAnyParallelogram(key_size, key_left, key_right, true, false, parallelogram, prefix_size + 1, callback);
+ if (forAnyParallelogram(key_size, key_left, key_right, true, false, parallelogram, prefix_size + 1, callback))
+ return true;
}
/// [x2] x (-inf .. y2]
@@ -958,14 +959,15 @@ static BoolMask forAnyParallelogram(
if (right_bounded)
{
parallelogram[prefix_size] = Range(key_right[prefix_size]);
- result = result | forAnyParallelogram(key_size, key_left, key_right, false, true, parallelogram, prefix_size + 1, callback);
+ if (forAnyParallelogram(key_size, key_left, key_right, false, true, parallelogram, prefix_size + 1, callback))
+ return true;
}
- return result;
+ return false;
}
-BoolMask KeyCondition::checkInRange(
+bool KeyCondition::mayBeTrueInRange(
size_t used_key_size,
const Field * left_key,
const Field * right_key,
@@ -991,7 +993,7 @@ BoolMask KeyCondition::checkInRange(
return forAnyParallelogram(used_key_size, left_key, right_key, true, right_bounded, key_ranges, 0,
[&] (const std::vector<Range> & key_ranges_parallelogram)
{
- auto res = checkInParallelogram(key_ranges_parallelogram, data_types);
+ auto res = mayBeTrueInParallelogram(key_ranges_parallelogram, data_types);
/* std::cerr << "Parallelogram: ";
for (size_t i = 0, size = key_ranges.size(); i != size; ++i)
@@ -1002,11 +1004,11 @@ BoolMask KeyCondition::checkInRange(
});
}
-
std::optional<Range> KeyCondition::applyMonotonicFunctionsChainToRange(
Range key_range,
MonotonicFunctionsChain & functions,
- DataTypePtr current_type)
+ DataTypePtr current_type
+)
{
for (auto & func : functions)
{
@@ -1039,7 +1041,7 @@ std::optional<Range> KeyCondition::applyMonotonicFunctionsChainToRange(
return key_range;
}
-BoolMask KeyCondition::checkInParallelogram(const std::vector<Range> & parallelogram, const DataTypes & data_types) const
+bool KeyCondition::mayBeTrueInParallelogram(const std::vector<Range> & parallelogram, const DataTypes & data_types) const
{
std::vector<BoolMask> rpn_stack;
for (size_t i = 0; i < rpn.size(); ++i)
@@ -1087,7 +1089,7 @@ BoolMask KeyCondition::checkInParallelogram(const std::vector<Range> & parallelo
if (!element.set_index)
throw Exception("Set for IN is not created yet", ErrorCodes::LOGICAL_ERROR);
- rpn_stack.emplace_back(element.set_index->checkInRange(parallelogram, data_types));
+ rpn_stack.emplace_back(element.set_index->mayBeTrueInRange(parallelogram, data_types));
if (element.function == RPNElement::FUNCTION_NOT_IN_SET)
rpn_stack.back() = !rpn_stack.back();
}
@@ -1122,23 +1124,22 @@ BoolMask KeyCondition::checkInParallelogram(const std::vector<Range> & parallelo
}
if (rpn_stack.size() != 1)
- throw Exception("Unexpected stack size in KeyCondition::checkInRange", ErrorCodes::LOGICAL_ERROR);
+ throw Exception("Unexpected stack size in KeyCondition::mayBeTrueInRange", ErrorCodes::LOGICAL_ERROR);
- return rpn_stack[0];
+ return rpn_stack[0].can_be_true;
}
-BoolMask KeyCondition::checkInRange(
+bool KeyCondition::mayBeTrueInRange(
size_t used_key_size, const Field * left_key, const Field * right_key, const DataTypes & data_types) const
{
- return checkInRange(used_key_size, left_key, right_key, data_types, true);
+ return mayBeTrueInRange(used_key_size, left_key, right_key, data_types, true);
}
-
-BoolMask KeyCondition::getMaskAfter(
+bool KeyCondition::mayBeTrueAfter(
size_t used_key_size, const Field * left_key, const DataTypes & data_types) const
{
- return checkInRange(used_key_size, left_key, nullptr, data_types, false);
+ return mayBeTrueInRange(used_key_size, left_key, nullptr, data_types, false);
}
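
The move from a `BoolMask` callback to a `bool` callback changes the control flow of `forAnyParallelogram`: instead of OR-ing masks across every candidate parallelogram, the recursion can stop at the first one that may contain matching rows. A simplified sketch of that early-exit shape (stand-in types, not the real `Range` machinery):

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Stand-in for std::vector<Range>: the real code walks parallelograms of Range objects.
using Parallelogram = std::vector<int>;

// Early-exit shape of the reworked forAnyParallelogram: with a bool callback,
// one feasible region is enough, so the search stops immediately.
static bool forAnySketch(const std::vector<Parallelogram> & candidates,
                         const std::function<bool(const Parallelogram &)> & may_be_true)
{
    for (const auto & candidate : candidates)
        if (may_be_true(candidate))
            return true;
    return false;
}

int main()
{
    std::vector<Parallelogram> candidates{{1}, {2}, {3}};
    bool result = forAnySketch(candidates, [](const Parallelogram & p) { return p[0] == 2; });
    std::cout << result << '\n';  // 1, and {3} is never inspected
}
```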
diff --git a/dbms/src/Storages/MergeTree/KeyCondition.h b/dbms/src/Storages/MergeTree/KeyCondition.h
index 2a5c520b243..61989d1b2d9 100644
--- a/dbms/src/Storages/MergeTree/KeyCondition.h
+++ b/dbms/src/Storages/MergeTree/KeyCondition.h
@@ -235,17 +235,17 @@ public:
const Names & key_column_names,
const ExpressionActionsPtr & key_expr);
- /// Whether the condition and its negation are (independently) feasible in the key range.
+ /// Whether the condition is feasible in the key range.
/// left_key and right_key must contain all fields in the sort_descr in the appropriate order.
/// data_types - the types of the key columns.
- BoolMask checkInRange(size_t used_key_size, const Field * left_key, const Field * right_key, const DataTypes & data_types) const;
+ bool mayBeTrueInRange(size_t used_key_size, const Field * left_key, const Field * right_key, const DataTypes & data_types) const;
- /// Whether the condition and its negation are feasible in the direct product of single column ranges specified by `parallelogram`.
- BoolMask checkInParallelogram(const std::vector<Range> & parallelogram, const DataTypes & data_types) const;
+ /// Whether the condition is feasible in the direct product of single column ranges specified by `parallelogram`.
+ bool mayBeTrueInParallelogram(const std::vector<Range> & parallelogram, const DataTypes & data_types) const;
- /// Are the condition and its negation valid in a semi-infinite (not limited to the right) key range.
+ /// Is the condition valid in a semi-infinite (not limited to the right) key range.
/// left_key must contain all the fields in the sort_descr in the appropriate order.
- BoolMask getMaskAfter(size_t used_key_size, const Field * left_key, const DataTypes & data_types) const;
+ bool mayBeTrueAfter(size_t used_key_size, const Field * left_key, const DataTypes & data_types) const;
/// Checks that the index can not be used.
bool alwaysUnknownOrTrue() const;
@@ -330,7 +330,7 @@ public:
static const AtomMap atom_map;
private:
- BoolMask checkInRange(
+ bool mayBeTrueInRange(
size_t used_key_size,
const Field * left_key,
const Field * right_key,
diff --git a/dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp b/dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp
index 40dc0bf6b52..99b4a49d111 100644
--- a/dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp
+++ b/dbms/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp
@@ -269,8 +269,8 @@ BlockInputStreams MergeTreeDataSelectExecutor::readFromParts(
if (part->isEmpty())
continue;
- if (minmax_idx_condition && !minmax_idx_condition->checkInParallelogram(
- part->minmax_idx.parallelogram, data.minmax_idx_column_types).can_be_true)
+ if (minmax_idx_condition && !minmax_idx_condition->mayBeTrueInParallelogram(
+ part->minmax_idx.parallelogram, data.minmax_idx_column_types))
continue;
if (max_block_numbers_to_read)
@@ -1200,8 +1200,8 @@ MarkRanges MergeTreeDataSelectExecutor::markRangesFromPKRange(
for (size_t i = 0; i < used_key_size; ++i)
index[i]->get(range.begin, index_left[i]);
- may_be_true = key_condition.getMaskAfter(
- used_key_size, index_left.data(), data.primary_key_data_types).can_be_true;
+ may_be_true = key_condition.mayBeTrueAfter(
+ used_key_size, index_left.data(), data.primary_key_data_types);
}
else
{
@@ -1214,8 +1214,8 @@ MarkRanges MergeTreeDataSelectExecutor::markRangesFromPKRange(
index[i]->get(range.end, index_right[i]);
}
- may_be_true = key_condition.checkInRange(
- used_key_size, index_left.data(), index_right.data(), data.primary_key_data_types).can_be_true;
+ may_be_true = key_condition.mayBeTrueInRange(
+ used_key_size, index_left.data(), index_right.data(), data.primary_key_data_types);
}
if (!may_be_true)
diff --git a/dbms/src/Storages/MergeTree/MergeTreeIndexFullText.cpp b/dbms/src/Storages/MergeTree/MergeTreeIndexFullText.cpp
index 246ad6784b2..264c91cd890 100644
--- a/dbms/src/Storages/MergeTree/MergeTreeIndexFullText.cpp
+++ b/dbms/src/Storages/MergeTree/MergeTreeIndexFullText.cpp
@@ -378,11 +378,11 @@ bool MergeTreeConditionFullText::mayBeTrueOnGranule(MergeTreeIndexGranulePtr idx
rpn_stack.emplace_back(true, false);
}
else
- throw Exception("Unexpected function type in BloomFilterCondition::RPNElement", ErrorCodes::LOGICAL_ERROR);
+ throw Exception("Unexpected function type in KeyCondition::RPNElement", ErrorCodes::LOGICAL_ERROR);
}
if (rpn_stack.size() != 1)
- throw Exception("Unexpected stack size in BloomFilterCondition::mayBeTrueOnGranule", ErrorCodes::LOGICAL_ERROR);
+ throw Exception("Unexpected stack size in KeyCondition::mayBeTrueInRange", ErrorCodes::LOGICAL_ERROR);
return rpn_stack[0].can_be_true;
}
diff --git a/dbms/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp b/dbms/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp
index 360e69eacc6..37c094db215 100644
--- a/dbms/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp
+++ b/dbms/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp
@@ -143,7 +143,7 @@ bool MergeTreeIndexConditionMinMax::mayBeTrueOnGranule(MergeTreeIndexGranulePtr
for (const auto & range : granule->parallelogram)
if (range.left.isNull() || range.right.isNull())
return true;
- return condition.checkInParallelogram(granule->parallelogram, index.data_types).can_be_true;
+ return condition.mayBeTrueInParallelogram(granule->parallelogram, index.data_types);
}
diff --git a/dbms/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp b/dbms/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp
index 045aa8a6461..abb7e8e88ce 100644
--- a/dbms/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp
+++ b/dbms/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp
@@ -45,7 +45,7 @@ void ReplicatedMergeTreePartCheckThread::start()
void ReplicatedMergeTreePartCheckThread::stop()
{
- //based on discussion on https://github.com/yandex/ClickHouse/pull/1489#issuecomment-344756259
+ //based on discussion on https://github.com/ClickHouse/ClickHouse/pull/1489#issuecomment-344756259
//using the schedule pool there is no problem in case stop is called two times in a row, nor if start is called multiple times
std::lock_guard lock(start_stop_mutex);
diff --git a/dbms/src/Storages/StorageFile.cpp b/dbms/src/Storages/StorageFile.cpp
index d5d8cd0856e..d38f1f76ccd 100644
--- a/dbms/src/Storages/StorageFile.cpp
+++ b/dbms/src/Storages/StorageFile.cpp
@@ -48,10 +48,11 @@ namespace ErrorCodes
namespace
{
+
/* Recursive directory listing with matched paths as a result.
* Have the same method in StorageHDFS.
*/
-std::vector<std::string> LSWithRegexpMatching(const std::string & path_for_ls, const std::string & for_match)
+std::vector<std::string> listFilesWithRegexpMatching(const std::string & path_for_ls, const std::string & for_match)
{
const size_t first_glob = for_match.find_first_of("*?{");
@@ -86,7 +87,8 @@ std::vector<std::string> LSWithRegexpMatching(const std::string & path_for_ls, c
{
if (re2::RE2::FullMatch(file_name, matcher))
{
- Strings result_part = LSWithRegexpMatching(full_path + "/", suffix_with_globs.substr(next_slash));
+ /// Recursion depth is limited by the pattern: '*' matches only at depth 1; to match at depth 2 the pattern must be '*/*'. So no additional depth check is needed.
+ Strings result_part = listFilesWithRegexpMatching(full_path + "/", suffix_with_globs.substr(next_slash));
std::move(result_part.begin(), result_part.end(), std::back_inserter(result));
}
}
@@ -143,7 +145,7 @@ StorageFile::StorageFile(
poco_path = Poco::Path(db_dir_path, poco_path);
const std::string path = poco_path.absolute().toString();
- paths = LSWithRegexpMatching("/", path);
+ paths = listFilesWithRegexpMatching("/", path);
for (const auto & cur_path : paths)
checkCreationIsAllowed(context_global, db_dir_path, cur_path);
is_db_table = false;
@@ -282,7 +284,7 @@ public:
else
{
if (storage.paths.size() != 1)
- throw Exception("Table '" + storage.table_name + "' is in readonly mode", ErrorCodes::DATABASE_ACCESS_DENIED);
+ throw Exception("Table '" + storage.table_name + "' is in readonly mode because of globs in filepath", ErrorCodes::DATABASE_ACCESS_DENIED);
write_buf = std::make_unique(storage.paths[0], DBMS_DEFAULT_BUFFER_SIZE, O_WRONLY | O_APPEND | O_CREAT);
}
diff --git a/dbms/src/Storages/StorageHDFS.cpp b/dbms/src/Storages/StorageHDFS.cpp
index 2a029463706..21beb4f79b4 100644
--- a/dbms/src/Storages/StorageHDFS.cpp
+++ b/dbms/src/Storages/StorageHDFS.cpp
@@ -171,6 +171,7 @@ Strings LSWithRegexpMatching(const String & path_for_ls, const HDFSFSPtr & fs, c
if (re2::RE2::FullMatch(file_name, matcher))
{
Strings result_part = LSWithRegexpMatching(full_path + "/", fs, suffix_with_globs.substr(next_slash));
+ /// Recursion depth is limited by the pattern: '*' matches only at depth 1; to match at depth 2 the pattern must be '*/*'. So no additional depth check is needed.
std::move(result_part.begin(), result_part.end(), std::back_inserter(result));
}
}
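
Both listings match the glob one path component at a time and recurse only into matched directories, which is why the recursion depth is bounded by the pattern itself. A hedged sketch of that strategy; the real code converts globs to regexps and matches them with re2, while `std::regex` and `std::filesystem` stand in here:

```cpp
#include <filesystem>
#include <iostream>
#include <regex>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Component-at-a-time listing: each recursive call consumes exactly one
// pattern component, so recursion depth equals the number of components.
static std::vector<std::string> listMatching(
    const fs::path & dir, const std::vector<std::regex> & components, size_t depth)
{
    if (depth == components.size())
        return {dir.string()};
    std::vector<std::string> result;
    if (!fs::is_directory(dir))
        return result;
    for (const auto & entry : fs::directory_iterator(dir))
        if (std::regex_match(entry.path().filename().string(), components[depth]))
        {
            auto part = listMatching(entry.path(), components, depth + 1);
            result.insert(result.end(), part.begin(), part.end());
        }
    return result;
}

int main()
{
    // '*/*' corresponds to two components: any entry, then any entry inside it.
    std::vector<std::regex> components{std::regex(".*"), std::regex(".*")};
    for (const auto & path : listMatching("/tmp", components, 0))
        std::cout << path << '\n';
}
```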
diff --git a/dbms/src/Storages/StorageS3.cpp b/dbms/src/Storages/StorageS3.cpp
index 59b2ef589a9..c135a16775a 100644
--- a/dbms/src/Storages/StorageS3.cpp
+++ b/dbms/src/Storages/StorageS3.cpp
@@ -84,7 +84,7 @@ namespace
const ConnectionTimeouts & timeouts)
: sample_block(sample_block_)
{
- auto minimum_upload_part_size = context.getConfigRef().getUInt64("s3_minimum_upload_part_size", 512'000'000);
+ auto minimum_upload_part_size = context.getConfigRef().getUInt64("s3.minimum_upload_part_size", 512 * 1024 * 1024);
write_buf = std::make_unique<WriteBufferFromS3>(uri, minimum_upload_part_size, timeouts);
writer = FormatFactory::instance().getOutput(format, *write_buf, sample_block, context);
}
@@ -119,14 +119,36 @@ namespace
}
-BlockInputStreams StorageS3::read(const Names & column_names,
+StorageS3::StorageS3(
+ const Poco::URI & uri_,
+ const std::string & database_name_,
+ const std::string & table_name_,
+ const String & format_name_,
+ const ColumnsDescription & columns_,
+ const ConstraintsDescription & constraints_,
+ Context & context_)
+ : IStorage(columns_)
+ , uri(uri_)
+ , context_global(context_)
+ , format_name(format_name_)
+ , database_name(database_name_)
+ , table_name(table_name_)
+{
+ setColumns(columns_);
+ setConstraints(constraints_);
+}
+
+
+BlockInputStreams StorageS3::read(
+ const Names & column_names,
const SelectQueryInfo & /*query_info*/,
const Context & context,
QueryProcessingStage::Enum /*processed_stage*/,
size_t max_block_size,
unsigned /*num_streams*/)
{
- BlockInputStreamPtr block_input = std::make_shared<StorageS3BlockInputStream>(uri,
+ BlockInputStreamPtr block_input = std::make_shared<StorageS3BlockInputStream>(
+ uri,
format_name,
getName(),
getHeaderBlock(column_names),
@@ -171,7 +193,7 @@ void registerStorageS3(StorageFactory & factory)
String format_name = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>();
- return StorageS3::create(uri, args.database_name, args.table_name, format_name, args.columns, args.context);
+ return StorageS3::create(uri, args.database_name, args.table_name, format_name, args.columns, args.constraints, args.context);
});
}
}
diff --git a/dbms/src/Storages/StorageS3.h b/dbms/src/Storages/StorageS3.h
index ad073aaa14c..05a69f439d0 100644
--- a/dbms/src/Storages/StorageS3.h
+++ b/dbms/src/Storages/StorageS3.h
@@ -5,6 +5,7 @@
#include
#include
+
namespace DB
{
/**
@@ -15,22 +16,14 @@ namespace DB
class StorageS3 : public ext::shared_ptr_helper<StorageS3>, public IStorage
{
public:
- StorageS3(const Poco::URI & uri_,
+ StorageS3(
+ const Poco::URI & uri_,
const std::string & database_name_,
const std::string & table_name_,
const String & format_name_,
const ColumnsDescription & columns_,
- Context & context_
- )
- : IStorage(columns_)
- , uri(uri_)
- , context_global(context_)
- , format_name(format_name_)
- , database_name(database_name_)
- , table_name(table_name_)
- {
- setColumns(columns_);
- }
+ const ConstraintsDescription & constraints_,
+ Context & context_);
String getName() const override
{
@@ -47,7 +40,8 @@ public:
return table_name;
}
- BlockInputStreams read(const Names & column_names,
+ BlockInputStreams read(
+ const Names & column_names,
const SelectQueryInfo & query_info,
const Context & context,
QueryProcessingStage::Enum processed_stage,
@@ -58,11 +52,10 @@ public:
void rename(const String & new_path_to_db, const String & new_database_name, const String & new_table_name, TableStructureWriteLockHolder &) override;
-protected:
+private:
Poco::URI uri;
const Context & context_global;
-private:
String format_name;
String database_name;
String table_name;
diff --git a/dbms/src/Storages/StorageURL.cpp b/dbms/src/Storages/StorageURL.cpp
index 4f3d41604f5..074e99c533b 100644
--- a/dbms/src/Storages/StorageURL.cpp
+++ b/dbms/src/Storages/StorageURL.cpp
@@ -54,8 +54,7 @@ namespace
const ConnectionTimeouts & timeouts)
: name(name_)
{
- read_buf = std::make_unique<ReadWriteBufferFromHTTP>(uri, method, callback, timeouts);
-
+ read_buf = std::make_unique<ReadWriteBufferFromHTTP>(uri, method, callback, timeouts, context.getSettingsRef().max_http_get_redirects);
reader = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size);
}
diff --git a/dbms/src/Storages/System/StorageSystemContributors.generated.cpp b/dbms/src/Storages/System/StorageSystemContributors.generated.cpp
index debd1fe2dc6..3822b648842 100644
--- a/dbms/src/Storages/System/StorageSystemContributors.generated.cpp
+++ b/dbms/src/Storages/System/StorageSystemContributors.generated.cpp
@@ -2,10 +2,12 @@
const char * auto_contributors[] {
"0xflotus",
"821008736@qq.com",
+ "Akazz",
"Alberto",
"Aleksandra (Ася)",
"Alex Bocharov",
"Alex Krash",
+ "Alex Ryndin",
"Alex Zatelepin",
"Alexander Avdonkin",
"Alexander Ermolaev",
@@ -18,9 +20,11 @@ const char * auto_contributors[] {
"Alexander Lukin",
"Alexander Makarov",
"Alexander Marshalov",
+ "Alexander Mezhov",
"Alexander Millin",
"Alexander Mochalin",
"Alexander Prudaev",
+ "Alexander Rodin",
"Alexander Sapin",
"Alexander Tokmakov",
"Alexander Tretiakov",
@@ -69,6 +73,7 @@ const char * auto_contributors[] {
"Bakhtiyor Ruziev",
"BanyRule",
"BayoNet",
+ "Big Elephant",
"BlahGeek",
"Bogdan",
"Bogdan Voronin",
@@ -96,6 +101,8 @@ const char * auto_contributors[] {
"Dmitry Petukhov",
"Dmitry Rubashkin",
"Dmitry S..ky / skype: dvska-at-skype",
+ "Doge",
+ "Eldar Zaitov",
"Elghazal Ahmed",
"Emmanuel Donin de Rosière",
"Eric",
@@ -105,9 +112,11 @@ const char * auto_contributors[] {
"Evgeniy Gatov",
"Evgeniy Udodov",
"Evgeny Konkov",
+ "Fabian Stäber",
"Fadi Hadzh",
"FeehanG",
"Flowyi",
+ "Francisco Barón",
"Fruit of Eden",
"Gary Dotzler",
"George",
@@ -121,7 +130,10 @@ const char * auto_contributors[] {
"Hiroaki Nakamura",
"Igor",
"Igor Hatarist",
+ "Igor Mineev",
"Igor Strykhar",
+ "Igr",
+ "Igr Mineev",
"Ildar Musin",
"Ildus Kurbangaliev",
"Ilya",
@@ -235,6 +247,8 @@ const char * auto_contributors[] {
"Pawel Rog",
"Persiyanov Dmitriy Andreevich",
"Quid37",
+ "Rafael David Tinoco",
+ "Ramazan Polat",
"Ravengg",
"Reto Kromer",
"Roman Lipovsky",
@@ -246,6 +260,7 @@ const char * auto_contributors[] {
"SaltTan",
"Samuel Chou",
"Serge Rider",
+ "Sergei Bocharov",
"Sergei Semin",
"Sergei Tsetlin (rekub)",
"Sergey Elantsev",
@@ -263,6 +278,7 @@ const char * auto_contributors[] {
"Stanislav Pavlovichev",
"Stas Pavlovichev",
"Stefan Thies",
+ "Stepan Herold",
"Stupnikov Andrey",
"SuperBot",
"Sébastien Launay",
@@ -271,6 +287,7 @@ const char * auto_contributors[] {
"The-Alchemist",
"Tobias Adamson",
"Tsarkova Anastasia",
+ "VDimir",
"Vadim",
"Vadim Plakhtinskiy",
"Vadim Skipin",
@@ -284,6 +301,7 @@ const char * auto_contributors[] {
"Victor Tarnavsky",
"Vitaliy Karnienko",
"Vitaliy Lyudvichenko",
+ "Vitaliy Zakaznikov",
"Vitaly Baranov",
"Vitaly Samigullin",
"Vivien Maisonneuve",
@@ -296,6 +314,7 @@ const char * auto_contributors[] {
"Vladislav Smirnov",
"Vojtech Splichal",
"Vsevolod Orlov",
+ "Vxider",
"Vyacheslav Alipov",
"Weiqing Xu",
"William Shallum",
@@ -312,9 +331,11 @@ const char * auto_contributors[] {
"abdrakhmanov",
"abyss7",
"achulkov2",
+ "akazz",
"akonyaev",
"akuzm",
"alesapin",
+ "alex-zaitsev",
"alexander kozhikhov",
"alexey-milovidov",
"andrewsg",
@@ -336,9 +357,13 @@ const char * auto_contributors[] {
"chertus",
"coraxster",
"daoready",
+ "dasmfm",
"davydovska",
"decaseal",
+ "dependabot[bot]",
"dimarub2000",
+ "dmitrii",
+ "dmitriiut",
"dmitry kuzmin",
"eejoin",
"egatov",
@@ -346,6 +371,7 @@ const char * auto_contributors[] {
"ezhaka",
"f1yegor",
"felixoid",
+ "fenglv",
"fessmage",
"filimonov",
"flow",
@@ -363,8 +389,10 @@ const char * auto_contributors[] {
"javi",
"javi santana",
"kmeaw",
+ "kreuzerkrieg",
"ks1322",
"kshvakov",
+ "l",
"leozhang",
"levushkin aleksej",
"levysh",
@@ -375,10 +403,13 @@ const char * auto_contributors[] {
"lomberts",
"luc1ph3r",
"maiha",
+ "malkfilipp",
+ "maqroll",
"maxkuzn",
"mf5137",
"mfridental",
"miha-g",
+ "millb",
"morty",
"moscas",
"never lee",
@@ -388,7 +419,9 @@ const char * auto_contributors[] {
"ogorbacheva",
"olegkv",
"orantius",
+ "palasonicq",
"peshkurov",
+ "philip.han",
"proller",
"pyos",
"qianlixiang",
@@ -399,6 +432,8 @@ const char * auto_contributors[] {
"santaux",
"sdk2",
"serebrserg",
+ "sev7e0",
+ "sfod",
"shangshujie",
"shedx",
"simon-says",
@@ -408,6 +443,7 @@ const char * auto_contributors[] {
"sundyli",
"svladykin",
"tai",
+ "tavplubix",
"topvisor",
"unknown",
"urgordeadbeef",
@@ -427,4 +463,6 @@ const char * auto_contributors[] {
"张健",
"张风啸",
"谢磊",
+ "黄朝晖",
+ "박현우",
nullptr};
diff --git a/dbms/src/TableFunctions/TableFunctionS3.cpp b/dbms/src/TableFunctions/TableFunctionS3.cpp
index 38ca0830e5b..31a66a91af2 100644
--- a/dbms/src/TableFunctions/TableFunctionS3.cpp
+++ b/dbms/src/TableFunctions/TableFunctionS3.cpp
@@ -5,15 +5,17 @@
namespace DB
{
+
StoragePtr TableFunctionS3::getStorage(
const String & source, const String & format, const ColumnsDescription & columns, Context & global_context, const std::string & table_name) const
{
Poco::URI uri(source);
- return StorageS3::create(uri, getDatabaseName(), table_name, format, columns, global_context);
+ return StorageS3::create(uri, getDatabaseName(), table_name, format, columns, ConstraintsDescription{}, global_context);
}
void registerTableFunctionS3(TableFunctionFactory & factory)
{
factory.registerFunction<TableFunctionS3>();
}
+
}
diff --git a/dbms/src/TableFunctions/TableFunctionS3.h b/dbms/src/TableFunctions/TableFunctionS3.h
index a4966be13c7..ecb9ea03197 100644
--- a/dbms/src/TableFunctions/TableFunctionS3.h
+++ b/dbms/src/TableFunctions/TableFunctionS3.h
@@ -1,12 +1,13 @@
#pragma once
#include
-#include
-#include
namespace DB
{
+
+class Context;
+
/* s3(source, format, structure) - creates a temporary storage for a file in S3
*/
class TableFunctionS3 : public ITableFunctionFileLike
@@ -20,6 +21,11 @@ public:
private:
StoragePtr getStorage(
- const String & source, const String & format, const ColumnsDescription & columns, Context & global_context, const std::string & table_name) const override;
+ const String & source,
+ const String & format,
+ const ColumnsDescription & columns,
+ Context & global_context,
+ const std::string & table_name) const override;
};
+
}
diff --git a/dbms/tests/instructions/developer_instruction_ru.md b/dbms/tests/instructions/developer_instruction_ru.md
index 411287e4072..7aa110b1b94 100644
--- a/dbms/tests/instructions/developer_instruction_ru.md
+++ b/dbms/tests/instructions/developer_instruction_ru.md
@@ -12,7 +12,7 @@
If you do not have an account, register at https://github.com/. Create SSH keys if you do not have them yet, and upload the public keys to GitHub. They are required for sending changes. You can use the same SSH keys for GitHub as for any other SSH server; most likely you already have them.
-Create a fork of the ClickHouse repository. To do this, press the "fork" button in the upper right corner of the page https://github.com/yandex/ClickHouse. You will get a full copy of the ClickHouse repository in your account; it is called a "fork". The development process consists of making the necessary changes in your fork of the repository and then creating a "pull request" so that the changes are accepted into the main repository.
+Create a fork of the ClickHouse repository. To do this, press the "fork" button in the upper right corner of the page https://github.com/ClickHouse/ClickHouse. You will get a full copy of the ClickHouse repository in your account; it is called a "fork". The development process consists of making the necessary changes in your fork of the repository and then creating a "pull request" so that the changes are accepted into the main repository.
To work with git repositories, install `git`.
@@ -61,7 +61,7 @@ and the repository exists.
You can also clone the repository over the https protocol:
```
-git clone https://github.com/yandex/ClickHouse.git
+git clone https://github.com/ClickHouse/ClickHouse.git
```
This option is not suitable for sending changes to the server. You can use it temporarily, then add SSH keys and replace the repository address with the `git remote` command.
@@ -228,7 +228,7 @@ sudo -u clickhouse ClickHouse/build/dbms/programs/clickhouse server --config-fil
Test development: https://clickhouse.yandex/docs/ru/development/tests/
-A list of tasks: https://github.com/yandex/ClickHouse/blob/master/dbms/tests/instructions/easy_tasks_sorted_ru.md
+A list of tasks: https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/instructions/easy_tasks_sorted_ru.md
# Test data
diff --git a/dbms/tests/instructions/easy_tasks_sorted_ru.md b/dbms/tests/instructions/easy_tasks_sorted_ru.md
index 43d86b709c3..6b9c0e18f9d 100644
--- a/dbms/tests/instructions/easy_tasks_sorted_ru.md
+++ b/dbms/tests/instructions/easy_tasks_sorted_ru.md
@@ -134,7 +134,7 @@ Geohash is a way of converting geographic
The entropy should be computed from a histogram. For an example of histogram calculation, see the implementation of the `quantileExact` function.
-https://github.com/yandex/ClickHouse/issues/3266
+https://github.com/ClickHouse/ClickHouse/issues/3266
## Functions to create and update the state of an aggregate function from a single tuple of arguments.
@@ -152,7 +152,7 @@ https://github.com/yandex/ClickHouse/issues/3266
## Correct comparison of Date and DateTime.
-https://github.com/yandex/ClickHouse/issues/2011
+https://github.com/ClickHouse/ClickHouse/issues/2011
Date and DateTime should be compared as if Date were extended to a DateTime at the start of the day in the same time zone.
diff --git a/dbms/tests/integration/test_block_structure_mismatch/test.py b/dbms/tests/integration/test_block_structure_mismatch/test.py
index ad96a9aa853..fa9272b8401 100644
--- a/dbms/tests/integration/test_block_structure_mismatch/test.py
+++ b/dbms/tests/integration/test_block_structure_mismatch/test.py
@@ -9,7 +9,7 @@ cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'])
node2 = cluster.add_instance('node2', main_configs=['configs/remote_servers.xml'])
-#test reproducing issue https://github.com/yandex/ClickHouse/issues/3162
+#test reproducing issue https://github.com/ClickHouse/ClickHouse/issues/3162
@pytest.fixture(scope="module")
def started_cluster():
try:
diff --git a/dbms/tests/integration/test_redirect_url_storage/__init__.py b/dbms/tests/integration/test_redirect_url_storage/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/dbms/tests/integration/test_redirect_url_storage/test.py b/dbms/tests/integration/test_redirect_url_storage/test.py
new file mode 100644
index 00000000000..cf64e84b96b
--- /dev/null
+++ b/dbms/tests/integration/test_redirect_url_storage/test.py
@@ -0,0 +1,45 @@
+import pytest
+from helpers.hdfs_api import HDFSApi
+
+from helpers.cluster import ClickHouseCluster
+
+cluster = ClickHouseCluster(__file__)
+node1 = cluster.add_instance('node1', with_zookeeper=False, with_hdfs=True)
+
+@pytest.fixture(scope="module")
+def started_cluster():
+ try:
+ cluster.start()
+
+ yield cluster
+ finally:
+ cluster.shutdown()
+
+def test_url_without_redirect(started_cluster):
+ hdfs_api = HDFSApi("root")
+ hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
+ assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
+
+ # access datanode port directly
+ node1.query("create table WebHDFSStorage (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50075/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
+ assert node1.query("select * from WebHDFSStorage") == "1\tMark\t72.53\n"
+
+def test_url_with_redirect_not_allowed(started_cluster):
+ hdfs_api = HDFSApi("root")
+ hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
+ assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
+
+ # access proxy port without allowing redirects
+ node1.query("create table WebHDFSStorageWithoutRedirect (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
+ with pytest.raises(Exception):
+ assert node1.query("select * from WebHDFSStorageWithoutRedirect") == "1\tMark\t72.53\n"
+
+def test_url_with_redirect_allowed(started_cluster):
+ hdfs_api = HDFSApi("root")
+ hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
+ assert hdfs_api.read_data("/simple_storage") == "1\tMark\t72.53\n"
+
+ # access proxy port with allowing redirects
+ # http://localhost:50070/webhdfs/v1/b?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0
+ node1.query("create table WebHDFSStorageWithRedirect (id UInt32, name String, weight Float64) ENGINE = URL('http://hdfs1:50070/webhdfs/v1/simple_storage?op=OPEN&namenoderpcaddress=hdfs1:9000&offset=0', 'TSV')")
+ assert node1.query("SET max_http_get_redirects=1; select * from WebHDFSStorageWithRedirect") == "1\tMark\t72.53\n"
diff --git a/dbms/tests/integration/test_storage_hdfs/test.py b/dbms/tests/integration/test_storage_hdfs/test.py
index 55ef98f6fde..cf4205115ff 100644
--- a/dbms/tests/integration/test_storage_hdfs/test.py
+++ b/dbms/tests/integration/test_storage_hdfs/test.py
@@ -29,7 +29,6 @@ def started_cluster():
def test_read_write_storage(started_cluster):
hdfs_api = HDFSApi("root")
- hdfs_api.write_data("/simple_storage", "1\tMark\t72.53\n")
node1.query("create table SimpleHDFSStorage (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/simple_storage', 'TSV')")
node1.query("insert into SimpleHDFSStorage values (1, 'Mark', 72.53)")
@@ -39,19 +38,40 @@ def test_read_write_storage(started_cluster):
def test_read_write_storage_with_globs(started_cluster):
hdfs_api = HDFSApi("root")
- for i in ["1", "2", "3"]:
- hdfs_api.write_data("/storage" + i, i + "\tMark\t72.53\n")
- assert hdfs_api.read_data("/storage" + i) == i + "\tMark\t72.53\n"
-
node1.query("create table HDFSStorageWithRange (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/storage{1..5}', 'TSV')")
node1.query("create table HDFSStorageWithEnum (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/storage{1,2,3,4,5}', 'TSV')")
node1.query("create table HDFSStorageWithQuestionMark (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/storage?', 'TSV')")
node1.query("create table HDFSStorageWithAsterisk (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/storage*', 'TSV')")
- assert node1.query("select count(*) from HDFSStorageWithRange") == '3\n'
- assert node1.query("select count(*) from HDFSStorageWithEnum") == '3\n'
- assert node1.query("select count(*) from HDFSStorageWithQuestionMark") == '3\n'
- assert node1.query("select count(*) from HDFSStorageWithAsterisk") == '3\n'
+ for i in ["1", "2", "3"]:
+ hdfs_api.write_data("/storage" + i, i + "\tMark\t72.53\n")
+ assert hdfs_api.read_data("/storage" + i) == i + "\tMark\t72.53\n"
+
+ assert node1.query("select count(*) from HDFSStorageWithRange") == "3\n"
+ assert node1.query("select count(*) from HDFSStorageWithEnum") == "3\n"
+ assert node1.query("select count(*) from HDFSStorageWithQuestionMark") == "3\n"
+ assert node1.query("select count(*) from HDFSStorageWithAsterisk") == "3\n"
+
+ try:
+ node1.query("insert into HDFSStorageWithEnum values (1, 'NEW', 4.2)")
+ assert False, "Exception have to be thrown"
+ except Exception as ex:
+ print ex
+ assert "in readonly mode" in str(ex)
+
+ try:
+ node1.query("insert into HDFSStorageWithQuestionMark values (1, 'NEW', 4.2)")
+ assert False, "Exception have to be thrown"
+ except Exception as ex:
+ print ex
+ assert "in readonly mode" in str(ex)
+
+ try:
+ node1.query("insert into HDFSStorageWithAsterisk values (1, 'NEW', 4.2)")
+ assert False, "Exception have to be thrown"
+ except Exception as ex:
+ print ex
+ assert "in readonly mode" in str(ex)
def test_read_write_table(started_cluster):
hdfs_api = HDFSApi("root")
@@ -78,18 +98,18 @@ def test_bad_hdfs_uri(started_cluster):
node1.query("create table BadStorage1 (id UInt32, name String, weight Float64) ENGINE = HDFS('hads:hgsdfs100500:9000/other_storage', 'TSV')")
except Exception as ex:
print ex
- assert 'Illegal HDFS URI' in str(ex)
+ assert "Illegal HDFS URI" in str(ex)
try:
node1.query("create table BadStorage2 (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs100500:9000/other_storage', 'TSV')")
except Exception as ex:
print ex
- assert 'Unable to create builder to connect to HDFS' in str(ex)
+ assert "Unable to create builder to connect to HDFS" in str(ex)
try:
node1.query("create table BadStorage3 (id UInt32, name String, weight Float64) ENGINE = HDFS('hdfs://hdfs1:9000/<>', 'TSV')")
except Exception as ex:
print ex
- assert 'Unable to open HDFS file' in str(ex)
+ assert "Unable to open HDFS file" in str(ex)
def test_globs_in_read_table(started_cluster):
hdfs_api = HDFSApi("root")
diff --git a/dbms/tests/integration/test_storage_s3/configs/min_chunk_size.xml b/dbms/tests/integration/test_storage_s3/configs/min_chunk_size.xml
index f61fcd2c5c9..2a9c465a7b8 100644
--- a/dbms/tests/integration/test_storage_s3/configs/min_chunk_size.xml
+++ b/dbms/tests/integration/test_storage_s3/configs/min_chunk_size.xml
@@ -1,3 +1,5 @@
- <s3_minimum_upload_part_size>1000000</s3_minimum_upload_part_size>
+ <s3>
+ <minimum_upload_part_size>1000000</minimum_upload_part_size>
+ </s3>
diff --git a/dbms/tests/performance/if_array_string.xml b/dbms/tests/performance/if_array_string.xml
index 3654cedd879..c135cf9c8ce 100644
--- a/dbms/tests/performance/if_array_string.xml
+++ b/dbms/tests/performance/if_array_string.xml
@@ -17,5 +17,5 @@
SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? ['Hello', 'World'] : materialize(['a', 'b', 'c']))
SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['Hello', 'World']) : materialize(['a', 'b', 'c']))
SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['', '']) : emptyArrayString())
- SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/yandex/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString())
+ SELECT count() FROM system.numbers WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString())
diff --git a/dbms/tests/queries/0_stateless/00504_insert_miss_columns.sh b/dbms/tests/queries/0_stateless/00504_insert_miss_columns.sh
index 49ecb5ef315..35290bfbb45 100755
--- a/dbms/tests/queries/0_stateless/00504_insert_miss_columns.sh
+++ b/dbms/tests/queries/0_stateless/00504_insert_miss_columns.sh
@@ -3,7 +3,7 @@
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. $CURDIR/../shell_config.sh
-# https://github.com/yandex/ClickHouse/issues/1300
+# https://github.com/ClickHouse/ClickHouse/issues/1300
$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS advertiser";
$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS advertiser_test";
diff --git a/dbms/tests/queries/0_stateless/00506_union_distributed.sql b/dbms/tests/queries/0_stateless/00506_union_distributed.sql
index 7fd921a6a65..0bd4dd43ac9 100644
--- a/dbms/tests/queries/0_stateless/00506_union_distributed.sql
+++ b/dbms/tests/queries/0_stateless/00506_union_distributed.sql
@@ -1,5 +1,5 @@
--- https://github.com/yandex/ClickHouse/issues/1059
+-- https://github.com/ClickHouse/ClickHouse/issues/1059
DROP TABLE IF EXISTS union1;
DROP TABLE IF EXISTS union2;
diff --git a/dbms/tests/queries/0_stateless/00702_join_on_dups.sql b/dbms/tests/queries/0_stateless/00702_join_on_dups.sql
index ca774a0a6c3..852378f543f 100644
--- a/dbms/tests/queries/0_stateless/00702_join_on_dups.sql
+++ b/dbms/tests/queries/0_stateless/00702_join_on_dups.sql
@@ -38,7 +38,7 @@ select s.*, j.* from (select * from X) as s full join (select * from Y) as j on
--select X.*, Y.* from X full join Y on (X.id + 1) = (Y.id + 1) order by id;
select 'self inner';
-select X.*, s.* from X inner join (select * from X) as s on X.id = s.id order by X.id;
+select X.*, s.* from X inner join (select * from X) as s on X.id = s.id order by X.id, X.x_a, s.x_a;
select 'self inner nullable';
select X.*, s.* from X inner join (select * from X) as s on X.x_b = s.x_b order by X.id;
select 'self inner nullable vs not nullable';
@@ -48,7 +48,7 @@ select 'self inner nullable vs not nullable 2';
select Y.*, s.* from Y inner join (select * from Y) as s on concat('n', Y.y_a) = s.y_b order by id;
select 'self left';
-select X.*, s.* from X left join (select * from X) as s on X.id = s.id order by X.id;
+select X.*, s.* from X left join (select * from X) as s on X.id = s.id order by X.id, X.x_a, s.x_a;
select 'self left nullable';
select X.*, s.* from X left join (select * from X) as s on X.x_b = s.x_b order by X.id;
select 'self left nullable vs not nullable';
@@ -58,7 +58,7 @@ select 'self left nullable vs not nullable 2';
select Y.*, s.* from Y left join (select * from Y) as s on concat('n', Y.y_a) = s.y_b order by id;
select 'self right';
-select X.*, s.* from X right join (select * from X) as s on X.id = s.id order by X.id;
+select X.*, s.* from X right join (select * from X) as s on X.id = s.id order by X.id, X.x_a, s.x_a;
select 'self right nullable';
select X.*, s.* from X right join (select * from X) as s on X.x_b = s.x_b order by X.id;
select 'self right nullable vs not nullable';
@@ -67,7 +67,7 @@ select X.*, s.* from X right join (select * from X) as s on X.id = s.x_b order b
--select Y.*, s.* from Y right join (select * from Y) as s on concat('n', Y.y_a) = s.y_b order by id;
select 'self full';
-select X.*, s.* from X full join (select * from X) as s on X.id = s.id order by X.id;
+select X.*, s.* from X full join (select * from X) as s on X.id = s.id order by X.id, X.x_a, s.x_a;
select 'self full nullable';
select X.*, s.* from X full join (select * from X) as s on X.x_b = s.x_b order by X.id;
select 'self full nullable vs not nullable';
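The pattern in these hunks is uniform: each self-join previously ordered only by the join key `X.id`, which leaves ties unresolved when the key is duplicated, so the reference-file comparison could flake. A hedged sketch of the failure mode, using a standalone table with hypothetical data:

```sql
-- With duplicate keys, each id = 1 row on the left matches both id = 1 rows
-- on the right, and the relative order of those matched rows is not
-- guaranteed unless the ORDER BY covers the non-key columns too.
CREATE TABLE X (id Int32, x_a String) ENGINE = Memory;
INSERT INTO X VALUES (1, 'l1'), (1, 'l2'), (2, 'l3');

SELECT X.id, X.x_a, s.x_a
FROM X INNER JOIN (SELECT * FROM X) AS s ON X.id = s.id
ORDER BY X.id, X.x_a, s.x_a; -- the extra keys make the output deterministic
```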
diff --git a/dbms/tests/queries/0_stateless/00847_multiple_join_same_column.sql b/dbms/tests/queries/0_stateless/00847_multiple_join_same_column.sql
index 44b3fe202d3..c7f0c6383c2 100644
--- a/dbms/tests/queries/0_stateless/00847_multiple_join_same_column.sql
+++ b/dbms/tests/queries/0_stateless/00847_multiple_join_same_column.sql
@@ -12,7 +12,9 @@ insert into y values (1,1);
select t.a, s.b, s.a, s.b, y.a, y.b from t
left join s on (t.a = s.a and t.b = s.b)
-left join y on (y.a = s.a and y.b = s.b) format Vertical;
+left join y on (y.a = s.a and y.b = s.b)
+order by t.a
+format Vertical;
select t.a, s.b, s.a, s.b, y.a, y.b from t
left join s on (t.a = s.a and s.b = t.b)
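Note the placement in the hunk above: `FORMAT` terminates a ClickHouse query, so the newly added `ORDER BY` has to sit between the last `JOIN` clause and `FORMAT Vertical`. A minimal sketch of the clause order, using the test's `t` and `s` tables:

```sql
-- Clause order: ORDER BY precedes FORMAT, which must come last.
SELECT t.a, s.b FROM t
LEFT JOIN s ON t.a = s.a
ORDER BY t.a
FORMAT Vertical;
```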
diff --git a/dbms/tests/queries/0_stateless/00849_multiple_comma_join.sql b/dbms/tests/queries/0_stateless/00849_multiple_comma_join.sql
index 6fcf4ebb563..d1d247a0174 100644
--- a/dbms/tests/queries/0_stateless/00849_multiple_comma_join.sql
+++ b/dbms/tests/queries/0_stateless/00849_multiple_comma_join.sql
@@ -37,21 +37,30 @@ INSERT INTO t3_00849 values (1,1), (1, Null);
INSERT INTO t4_00849 values (1,1), (1, Null);
SELECT 'SELECT * FROM t1, t2';
-SELECT * FROM t1_00849, t2_00849;
+SELECT * FROM t1_00849, t2_00849
+ORDER BY t1_00849.a, t2_00849.b;
SELECT 'SELECT * FROM t1, t2 WHERE t1.a = t2.a';
-SELECT * FROM t1_00849, t2_00849 WHERE t1_00849.a = t2_00849.a;
+SELECT * FROM t1_00849, t2_00849 WHERE t1_00849.a = t2_00849.a
+ORDER BY t1_00849.a, t2_00849.b;
SELECT 'SELECT t1.a, t2.a FROM t1, t2 WHERE t1.b = t2.b';
SELECT t1_00849.a, t2_00849.b FROM t1_00849, t2_00849 WHERE t1_00849.b = t2_00849.b;
SELECT 'SELECT t1.a, t2.b, t3.b FROM t1, t2, t3 WHERE t1.a = t2.a AND t1.a = t3.a';
-SELECT t1_00849.a, t2_00849.b, t3_00849.b FROM t1_00849, t2_00849, t3_00849 WHERE t1_00849.a = t2_00849.a AND t1_00849.a = t3_00849.a;
+SELECT t1_00849.a, t2_00849.b, t3_00849.b FROM t1_00849, t2_00849, t3_00849
+WHERE t1_00849.a = t2_00849.a AND t1_00849.a = t3_00849.a
+ORDER BY t2_00849.b, t3_00849.b;
SELECT 'SELECT t1.a, t2.b, t3.b FROM t1, t2, t3 WHERE t1.b = t2.b AND t1.b = t3.b';
SELECT t1_00849.a, t2_00849.b, t3_00849.b FROM t1_00849, t2_00849, t3_00849 WHERE t1_00849.b = t2_00849.b AND t1_00849.b = t3_00849.b;
SELECT 'SELECT t1.a, t2.b, t3.b, t4.b FROM t1, t2, t3, t4 WHERE t1.a = t2.a AND t1.a = t3.a AND t1.a = t4.a';
-SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849 WHERE t1_00849.a = t2_00849.a AND t1_00849.a = t3_00849.a AND t1_00849.a = t4_00849.a;
+SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849
+WHERE t1_00849.a = t2_00849.a AND t1_00849.a = t3_00849.a AND t1_00849.a = t4_00849.a
+ORDER BY t2_00849.b, t3_00849.b, t4_00849.b;
SELECT 'SELECT t1.a, t2.b, t3.b, t4.b FROM t1, t2, t3, t4 WHERE t1.b = t2.b AND t1.b = t3.b AND t1.b = t4.b';
-SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849 WHERE t1_00849.b = t2_00849.b AND t1_00849.b = t3_00849.b AND t1_00849.b = t4_00849.b;
+SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849
+WHERE t1_00849.b = t2_00849.b AND t1_00849.b = t3_00849.b AND t1_00849.b = t4_00849.b;
SELECT 'SELECT t1.a, t2.b, t3.b, t4.b FROM t1, t2, t3, t4 WHERE t1.a = t2.a AND t2.a = t3.a AND t3.a = t4.a';
-SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849 WHERE t1_00849.a = t2_00849.a AND t2_00849.a = t3_00849.a AND t3_00849.a = t4_00849.a;
+SELECT t1_00849.a, t2_00849.b, t3_00849.b, t4_00849.b FROM t1_00849, t2_00849, t3_00849, t4_00849
+WHERE t1_00849.a = t2_00849.a AND t2_00849.a = t3_00849.a AND t3_00849.a = t4_00849.a
+ORDER BY t2_00849.b, t3_00849.b, t4_00849.b;
DROP TABLE t1_00849;
DROP TABLE t2_00849;
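For background on what this test covers: a comma-separated table list is ClickHouse's cross-join syntax, and a comma join whose `WHERE` clause carries an equality condition is effectively an inner join, which is what the rewriting logic under test produces. A hedged sketch of the equivalence:

```sql
-- The two forms below should return the same rows; the explicit ORDER BY
-- (as added throughout the test) fixes the output order for comparison.
SELECT t1_00849.a, t2_00849.b
FROM t1_00849, t2_00849
WHERE t1_00849.a = t2_00849.a
ORDER BY t1_00849.a, t2_00849.b;

SELECT t1_00849.a, t2_00849.b
FROM t1_00849 INNER JOIN t2_00849 ON t1_00849.a = t2_00849.a
ORDER BY t1_00849.a, t2_00849.b;
```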
diff --git a/dbms/tests/queries/0_stateless/01013_hex_float.reference b/dbms/tests/queries/0_stateless/01013_hex_float.reference
new file mode 100644
index 00000000000..c3e4ec26847
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01013_hex_float.reference
@@ -0,0 +1,12 @@
+000000000000F03F
+0000000000405940
+00C84E676DC1AB43
+2342920CA19CC73B
+7DC39425AD49B254
+2C616D8C9DF0423F
+3BDF4F8D97FE5EC0
+0A57C742
+00004843
+00004943
+0000000000406940
+0000000000606940
diff --git a/dbms/tests/queries/0_stateless/01013_hex_float.sql b/dbms/tests/queries/0_stateless/01013_hex_float.sql
new file mode 100644
index 00000000000..30869529d87
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01013_hex_float.sql
@@ -0,0 +1,10 @@
+SELECT hex(1.0);
+SELECT hex(101.);
+SELECT hex(1e+18);
+SELECT hex(1e-20);
+SELECT hex(1e+100);
+SELECT hex(0.000578);
+SELECT hex(-123.978);
+SELECT hex(toFloat32(99.67));
+SELECT hex(toFloat32(number)) FROM numbers(200, 2);
+SELECT hex(toFloat64(number)) FROM numbers(202, 2);
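The expected outputs in the `.reference` file above follow from `hex` dumping the raw IEEE-754 bytes of its floating-point argument in little-endian order; for example, `1.0` as a `Float64` is `0x3FF0000000000000`, whose bytes reversed give the first reference line:

```sql
-- hex() over floats returns the little-endian IEEE-754 byte representation:
SELECT hex(1.0);              -- 000000000000F03F  (8 bytes, Float64)
SELECT hex(toFloat32(99.67)); -- 0A57C742          (4 bytes, Float32)
```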
diff --git a/dbms/tests/queries/0_stateless/01013_repeat_function.reference b/dbms/tests/queries/0_stateless/01013_repeat_function.reference
new file mode 100644
index 00000000000..46bb248a99a
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01013_repeat_function.reference
@@ -0,0 +1,38 @@
+abcabcabcabcabcabcabcabcabcabc
+abcabcabc
+sdfggsdfgg
+xywq
+
+abcabcabcabcabcabcabcabcabcabcabcabc
+sdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfgg
+xywqxywqxywqxywq
+plkfplkfplkfplkfplkf
+abcabcabcabc
+sdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfgg
+xywqxywqxywqxywqxywqxywqxywqxywqxywq
+plkfplkfplkfplkfplkfplkfplkf
+abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc
+sdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfgg
+xywqxywqxywqxywqxywq
+plkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkfplkf
+abcabcabcabcabcabcabcabcabcabc
+sdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfggsdfgg
+xywqxywqxywqxywqxywqxywqxywqxywqxywqxywq
+plkfplkfplkfplkfplkfplkfplkfplkfplkfplkf
+abcabcabc
+abcabc
+abc
+
+abcabcabcabcabcabcabcabcabcabcabcabc
+abcabcabcabcabcabcabcabcabcabc
+abcabcabcabc
+abcabcabcabcabc
+abcabcabcabc
+abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc
+abcabcabcabcabcabcabcabcabc
+abcabcabcabcabcabcabc
+abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc
+abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc
+abcabcabcabcabc
+abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc
+Hello, world! Hello, world! Hello, world!
diff --git a/dbms/tests/queries/0_stateless/01013_repeat_function.sql b/dbms/tests/queries/0_stateless/01013_repeat_function.sql
new file mode 100644
index 00000000000..85b0c16b4ab
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01013_repeat_function.sql
@@ -0,0 +1,26 @@
+SELECT repeat('abc', 10);
+DROP TABLE IF EXISTS defaults;
+CREATE TABLE defaults
+(
+ strings String,
+ u8 UInt8,
+ u16 UInt16,
+ u32 UInt32,
+ u64 UInt64
+)ENGINE = Memory();
+
+INSERT INTO defaults values ('abc', 3, 12, 4, 56) ('sdfgg', 2, 10, 21, 200) ('xywq', 1, 4, 9, 5) ('plkf', 0, 5, 7,77);
+
+SELECT repeat(strings, u8) FROM defaults;
+SELECT repeat(strings, u16) FROM defaults;
+SELECT repeat(strings, u32) from defaults;
+SELECT repeat(strings, u64) FROM defaults;
+SELECT repeat(strings, 10) FROM defaults;
+SELECT repeat('abc', u8) FROM defaults;
+SELECT repeat('abc', u16) FROM defaults;
+SELECT repeat('abc', u32) FROM defaults;
+SELECT repeat('abc', u64) FROM defaults;
+
+SELECT repeat('Hello, world! ', 3);
+
+DROP TABLE defaults;
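The `.reference` file above pairs with these queries line by line: `repeat(s, n)` returns `n` concatenated copies of `s`, either argument may be a column, and `n = 0` yields an empty string (the blank reference line comes from the `('plkf', 0, ...)` row):

```sql
-- repeat(s, n) concatenates n copies of s; n = 0 produces ''.
SELECT repeat('abc', 3);            -- 'abcabcabc'
SELECT repeat('Hello, world! ', 3); -- matches the last reference line above
```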
diff --git a/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.reference b/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.reference
new file mode 100644
index 00000000000..50cc34ce29c
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.reference
@@ -0,0 +1,13 @@
+1000000
+0
+
+1
+22
+333
+4444
+55555
+666666
+7777777
+88888888
+999999999
+10101010101010101010
diff --git a/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql b/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql
new file mode 100644
index 00000000000..53e55a63702
--- /dev/null
+++ b/dbms/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql
@@ -0,0 +1,6 @@
+SELECT length(repeat('x', 1000000));
+SELECT length(repeat('', 1000000));
+SELECT length(repeat('x', 1000001)); -- { serverError 131 }
+SET max_memory_usage = 100000000;
+SELECT length(repeat(repeat('Hello, world!', 1000000), 10)); -- { serverError 241 }
+SELECT repeat(toString(number), number) FROM system.numbers LIMIT 11;
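The `-- { serverError N }` annotations are the stateless-test convention for expected failures: the annotated statement must fail with exactly that server error code, or the test itself fails. Judging by these cases, code 131 is the string-size limit and 241 the memory limit; a hedged sketch of the convention:

```sql
-- An unannotated statement must succeed; an annotated one must fail with
-- the given code (here: the string-size limit, per the test above).
SELECT length(repeat('x', 1000000));                 -- returns 1000000
SELECT length(repeat('x', 1000001)); -- { serverError 131 }
```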
diff --git a/debian/changelog b/debian/changelog
index 563be7c48eb..131741b202f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,5 +1,5 @@
-clickhouse (19.15.1.1) unstable; urgency=low
+clickhouse (19.16.1.1) unstable; urgency=low
* Modified source code
- -- clickhouse-release Fri, 06 Sep 2019 17:58:30 +0300
+ -- clickhouse-release Tue, 24 Sep 2019 14:47:28 +0300
diff --git a/docker/client/Dockerfile b/docker/client/Dockerfile
index 9fde85b9fb0..3134686b0c0 100644
--- a/docker/client/Dockerfile
+++ b/docker/client/Dockerfile
@@ -1,7 +1,7 @@
FROM ubuntu:18.04
ARG repository="deb http://repo.yandex.ru/clickhouse/deb/stable/ main/"
-ARG version=19.15.1.*
+ARG version=19.16.1.*
RUN apt-get update \
&& apt-get install --yes --no-install-recommends \
diff --git a/docker/client/README.md b/docker/client/README.md
index 385858d7f7b..b8cb51adb1e 100644
--- a/docker/client/README.md
+++ b/docker/client/README.md
@@ -4,4 +4,4 @@ For more information see [ClickHouse Server Docker Image](https://hub.docker.com
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.
diff --git a/docker/images.json b/docker/images.json
index fef364a942f..a3094251a5a 100644
--- a/docker/images.json
+++ b/docker/images.json
@@ -12,5 +12,6 @@
"docker/test/stateless_with_coverage": "yandex/clickhouse-stateless-test-with-coverage",
"docker/test/unit": "yandex/clickhouse-unit-test",
"docker/test/stress": "yandex/clickhouse-stress-test",
+ "docker/test/split_build_smoke_test": "yandex/clickhouse-split-build-smoke-test",
"dbms/tests/integration/image": "yandex/clickhouse-integration-tests-runner"
}
diff --git a/docker/packager/binary/Dockerfile b/docker/packager/binary/Dockerfile
index 1ffeab01440..2ad696cd279 100644
--- a/docker/packager/binary/Dockerfile
+++ b/docker/packager/binary/Dockerfile
@@ -59,7 +59,7 @@ RUN apt-get update -y \
gdb \
rename \
wget
-
+
# Build and install tools for cross-linking to Darwin
ENV CC=clang-8
diff --git a/docker/server/Dockerfile b/docker/server/Dockerfile
index c65a0ddb550..4b5420a3e5a 100644
--- a/docker/server/Dockerfile
+++ b/docker/server/Dockerfile
@@ -1,7 +1,7 @@
FROM ubuntu:18.04
ARG repository="deb http://repo.yandex.ru/clickhouse/deb/stable/ main/"
-ARG version=19.15.1.*
+ARG version=19.16.1.*
ARG gosu_ver=1.10
RUN apt-get update \
diff --git a/docker/server/README.md b/docker/server/README.md
index 4ed73a0fb9b..b1225e8a27a 100644
--- a/docker/server/README.md
+++ b/docker/server/README.md
@@ -59,4 +59,4 @@ EOSQL
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.
diff --git a/docker/test/Dockerfile b/docker/test/Dockerfile
index 0106d877feb..189e76d6c6b 100644
--- a/docker/test/Dockerfile
+++ b/docker/test/Dockerfile
@@ -1,7 +1,7 @@
FROM ubuntu:18.04
ARG repository="deb http://repo.yandex.ru/clickhouse/deb/stable/ main/"
-ARG version=19.15.1.*
+ARG version=19.16.1.*
RUN apt-get update && \
apt-get install -y apt-transport-https dirmngr && \
diff --git a/docker/test/README.md b/docker/test/README.md
index 0833aacb822..563cfd837e9 100644
--- a/docker/test/README.md
+++ b/docker/test/README.md
@@ -2,4 +2,4 @@
## License
-View [license information](https://github.com/yandex/ClickHouse/blob/master/LICENSE) for the software contained in this image.
+View [license information](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) for the software contained in this image.
diff --git a/docker/test/split_build_smoke_test/Dockerfile b/docker/test/split_build_smoke_test/Dockerfile
new file mode 100644
index 00000000000..c77db1c6c88
--- /dev/null
+++ b/docker/test/split_build_smoke_test/Dockerfile
@@ -0,0 +1,6 @@
+# docker build -t yandex/clickhouse-split-build-smoke-test .
+FROM yandex/clickhouse-binary-builder
+
+COPY run.sh /run.sh
+
+CMD /run.sh
diff --git a/docker/test/split_build_smoke_test/run.sh b/docker/test/split_build_smoke_test/run.sh
new file mode 100755
index 00000000000..63cd5fada5b
--- /dev/null
+++ b/docker/test/split_build_smoke_test/run.sh
@@ -0,0 +1,17 @@
+#!/bin/bash
+
+set -x
+
+install_and_run_server() {
+ mkdir /unpacked
+ tar -xzf /package_folder/shared_build.tgz -C /unpacked --strip 1
+ LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-server --config /unpacked/config/config.xml >/var/log/clickhouse-server/stderr.log 2>&1 &
+ sleep 5
+}
+
+run_client() {
+ LD_LIBRARY_PATH=/unpacked /unpacked/clickhouse-client --query "select 'OK'" 2>/var/log/clickhouse-server/clientstderr.log || echo "FAIL"
+}
+
+install_and_run_server
+run_client
diff --git a/docs/en/data_types/array.md b/docs/en/data_types/array.md
index 3338d2648d2..4e9c7d5930e 100644
--- a/docs/en/data_types/array.md
+++ b/docs/en/data_types/array.md
@@ -8,42 +8,34 @@ Array of `T`-type items.
You can use a function to create an array:
-```
+```sql
array(T)
```
You can also use square brackets.
-```
+```sql
[]
```
Example of creating an array:
+```sql
+SELECT array(1, 2) AS x, toTypeName(x)
```
-:) SELECT array(1, 2) AS x, toTypeName(x)
-
-SELECT
- [1, 2] AS x,
- toTypeName(x)
-
+```text
┌─x─────┬─toTypeName(array(1, 2))─┐
│ [1,2] │ Array(UInt8) │
└───────┴─────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
-
-:) SELECT [1, 2] AS x, toTypeName(x)
-
-SELECT
- [1, 2] AS x,
- toTypeName(x)
-
+```
+```sql
+SELECT [1, 2] AS x, toTypeName(x)
+```
+```text
┌─x─────┬─toTypeName([1, 2])─┐
│ [1,2] │ Array(UInt8) │
└───────┴────────────────────┘
-1 rows in set. Elapsed: 0.002 sec.
```
## Working with data types
@@ -54,31 +46,23 @@ If ClickHouse couldn't determine the data type, it will generate an exception. F
Examples of automatic data type detection:
-```
-:) SELECT array(1, 2, NULL) AS x, toTypeName(x)
-
-SELECT
- [1, 2, NULL] AS x,
- toTypeName(x)
-
+```sql
+SELECT array(1, 2, NULL) AS x, toTypeName(x)
+```
+```text
┌─x──────────┬─toTypeName(array(1, 2, NULL))─┐
│ [1,2,NULL] │ Array(Nullable(UInt8)) │
└────────────┴───────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
If you try to create an array of incompatible data types, ClickHouse throws an exception:
+```sql
+SELECT array(1, 'a')
```
-:) SELECT array(1, 'a')
-
-SELECT [1, 'a']
-
+```text
Received exception from server (version 1.1.54388):
Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not.
-
-0 rows in set. Elapsed: 0.246 sec.
```
diff --git a/docs/en/data_types/decimal.md b/docs/en/data_types/decimal.md
index 136297f8d48..262330dcff9 100644
--- a/docs/en/data_types/decimal.md
+++ b/docs/en/data_types/decimal.md
@@ -51,36 +51,36 @@ Some functions on Decimal return result as Float64 (for example, var or stddev).
During calculations on Decimal, integer overflows might happen. Excessive digits in the fraction are discarded (not rounded). Excessive digits in the integer part lead to an exception.
-```
+```sql
SELECT toDecimal32(2, 4) AS x, x / 3
```
-```
+```text
┌──────x─┬─divide(toDecimal32(2, 4), 3)─┐
│ 2.0000 │ 0.6666 │
└────────┴──────────────────────────────┘
```
-```
+```sql
SELECT toDecimal32(4.2, 8) AS x, x * x
```
-```
+```text
DB::Exception: Scale is out of bounds.
```
-```
+```sql
SELECT toDecimal32(4.2, 8) AS x, 6 * x
```
-```
+```text
DB::Exception: Decimal math overflow.
```
Overflow checks slow down operations. If overflows are known to be impossible, it makes sense to disable the checks using the `decimal_check_overflow` setting. When checks are disabled and an overflow happens, the result will be incorrect:
-```
+```sql
SET decimal_check_overflow = 0;
SELECT toDecimal32(4.2, 8) AS x, 6 * x
```
-```
+```text
┌──────────x─┬─multiply(6, toDecimal32(4.2, 8))─┐
│ 4.20000000 │ -17.74967296 │
└────────────┴──────────────────────────────────┘
@@ -88,10 +88,10 @@ SELECT toDecimal32(4.2, 8) AS x, 6 * x
Overflow checks happen not only on arithmetic operations, but also on value comparison:
-```
+```sql
SELECT toDecimal32(1, 8) < 100
```
-```
+```text
DB::Exception: Can't compare.
```
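Complementing the documentation snippet above: the check can be switched back on in the same session (a sketch, assuming `1` is the setting's default), after which the same multiplication raises the overflow exception again.

```sql
-- Re-enable overflow checks; the overflowing multiplication now throws.
SET decimal_check_overflow = 1;
SELECT toDecimal32(4.2, 8) AS x, 6 * x; -- DB::Exception: Decimal math overflow.
```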
diff --git a/docs/en/data_types/domains/ipv4.md b/docs/en/data_types/domains/ipv4.md
index dd7e08d673e..1ee6903b897 100644
--- a/docs/en/data_types/domains/ipv4.md
+++ b/docs/en/data_types/domains/ipv4.md
@@ -4,13 +4,13 @@
### Basic Usage
-``` sql
+```sql
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY url;
DESCRIBE TABLE hits;
```
-```
+```text
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐
│ url │ String │ │ │ │ │
│ from │ IPv4 │ │ │ │ │
@@ -19,19 +19,19 @@ DESCRIBE TABLE hits;
OR you can use IPv4 domain as a key:
-``` sql
+```sql
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY from;
```
`IPv4` domain supports custom input format as IPv4-strings:
-``` sql
+```sql
INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '116.253.40.133')('https://clickhouse.yandex', '183.247.232.58')('https://clickhouse.yandex/docs/en/', '116.106.34.242');
SELECT * FROM hits;
```
-```
+```text
┌─url────────────────────────────────┬───────────from─┐
│ https://clickhouse.yandex/docs/en/ │ 116.106.34.242 │
│ https://wikipedia.org │ 116.253.40.133 │
@@ -41,11 +41,11 @@ SELECT * FROM hits;
Values are stored in compact binary form:
-``` sql
+```sql
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(from)─┬─hex(from)─┐
│ IPv4 │ B7F7E83A │
└──────────────────┴───────────┘
@@ -54,7 +54,7 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
Domain values are not implicitly convertible to types other than `UInt32`.
If you want to convert an `IPv4` value to a string, you have to do it explicitly with the `IPv4NumToString()` function:
-``` sql
+```sql
SELECT toTypeName(s), IPv4NumToString(from) as s FROM hits LIMIT 1;
```
@@ -66,11 +66,11 @@ SELECT toTypeName(s), IPv4NumToString(from) as s FROM hits LIMIT 1;
Or cast to a `UInt32` value:
-``` sql
+```sql
SELECT toTypeName(i), CAST(from as UInt32) as i FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(CAST(from, 'UInt32'))─┬──────────i─┐
│ UInt32 │ 3086477370 │
└──────────────────────────────────┴────────────┘
diff --git a/docs/en/data_types/domains/ipv6.md b/docs/en/data_types/domains/ipv6.md
index 1bfbe3400b5..e2abaff0172 100644
--- a/docs/en/data_types/domains/ipv6.md
+++ b/docs/en/data_types/domains/ipv6.md
@@ -4,13 +4,13 @@
### Basic Usage
-``` sql
+```sql
CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY url;
DESCRIBE TABLE hits;
```
-```
+```text
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐
│ url │ String │ │ │ │ │
│ from │ IPv6 │ │ │ │ │
@@ -19,19 +19,19 @@ DESCRIBE TABLE hits;
OR you can use `IPv6` domain as a key:
-``` sql
+```sql
CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY from;
```
`IPv6` domain supports custom input as IPv6-strings:
-``` sql
+```sql
INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '2a02:aa08:e000:3100::2')('https://clickhouse.yandex', '2001:44c8:129:2632:33:0:252:2')('https://clickhouse.yandex/docs/en/', '2a02:e980:1e::1');
SELECT * FROM hits;
```
-```
+```text
┌─url────────────────────────────────┬─from──────────────────────────┐
│ https://clickhouse.yandex │ 2001:44c8:129:2632:33:0:252:2 │
│ https://clickhouse.yandex/docs/en/ │ 2a02:e980:1e::1 │
@@ -41,11 +41,11 @@ SELECT * FROM hits;
Values are stored in compact binary form:
-``` sql
+```sql
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(from)─┬─hex(from)────────────────────────┐
│ IPv6 │ 200144C8012926320033000002520002 │
└──────────────────┴──────────────────────────────────┘
@@ -54,11 +54,11 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
Domain values are not implicitly convertible to types other than `FixedString(16)`.
If you want to convert an `IPv6` value to a string, you have to do it explicitly with the `IPv6NumToString()` function:
-``` sql
+```sql
SELECT toTypeName(s), IPv6NumToString(from) as s FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(IPv6NumToString(from))─┬─s─────────────────────────────┐
│ String │ 2001:44c8:129:2632:33:0:252:2 │
└───────────────────────────────────┴───────────────────────────────┘
@@ -66,11 +66,11 @@ SELECT toTypeName(s), IPv6NumToString(from) as s FROM hits LIMIT 1;
Or cast to a `FixedString(16)` value:
-``` sql
+```sql
SELECT toTypeName(i), CAST(from as FixedString(16)) as i FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(CAST(from, 'FixedString(16)'))─┬─i───────┐
│ FixedString(16) │ ��� │
└───────────────────────────────────────────┴─────────┘
diff --git a/docs/en/data_types/enum.md b/docs/en/data_types/enum.md
index 692cbad39b1..3b27b2e6c7e 100644
--- a/docs/en/data_types/enum.md
+++ b/docs/en/data_types/enum.md
@@ -26,19 +26,15 @@ ENGINE = TinyLog
Column `x` can only store values that are listed in the type definition: `'hello'` or `'world'`. If you try to save any other value, ClickHouse will raise an exception. The 8-bit size for this `Enum` is chosen automatically.
```sql
-:) INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello')
-
-INSERT INTO t_enum VALUES
-
+INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello')
+```
+```text
Ok.
-
-3 rows in set. Elapsed: 0.002 sec.
-
-:) insert into t_enum values('a')
-
-INSERT INTO t_enum VALUES
-
-
+```
+```sql
+INSERT INTO t_enum values('a')
+```
+```text
Exception on client:
Code: 49. DB::Exception: Unknown element 'a' for type Enum('hello' = 1, 'world' = 2)
```
@@ -47,7 +43,8 @@ When you query data from the table, ClickHouse outputs the string values from `E
```sql
SELECT * FROM t_enum
-
+```
+```text
┌─x─────┐
│ hello │
│ world │
@@ -59,7 +56,8 @@ If you need to see the numeric equivalents of the rows, you must cast the `Enum`
```sql
SELECT CAST(x, 'Int8') FROM t_enum
-
+```
+```text
┌─CAST(x, 'Int8')─┐
│ 1 │
│ 2 │
@@ -71,7 +69,8 @@ To create an Enum value in a query, you also need to use `CAST`.
```sql
SELECT toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))
-
+```
+```text
┌─toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))─┐
│ Enum8('a' = 1, 'b' = 2) │
└─────────────────────────────────────────────────────┘
@@ -85,7 +84,7 @@ Neither the string nor the numeric value in an `Enum` can be [NULL](../query_lan
An `Enum` can be contained in [Nullable](nullable.md) type. So if you create a table using the query
-```
+```sql
CREATE TABLE t_enum_nullable
(
x Nullable( Enum8('hello' = 1, 'world' = 2) )
@@ -95,7 +94,7 @@ ENGINE = TinyLog
it can store not only `'hello'` and `'world'`, but `NULL`, as well.
-```
+```sql
INSERT INTO t_enum_nullable Values('hello'),('world'),(NULL)
```
diff --git a/docs/en/data_types/fixedstring.md b/docs/en/data_types/fixedstring.md
index 9725c71a779..fd2f1a17a37 100644
--- a/docs/en/data_types/fixedstring.md
+++ b/docs/en/data_types/fixedstring.md
@@ -4,7 +4,7 @@ A fixed-length string of `N` bytes (neither characters nor code points).
To declare a column of `FixedString` type, use the following syntax:
-```
+```sql
FixedString(N)
```
@@ -30,7 +30,7 @@ When selecting the data, ClickHouse does not remove the null bytes at the end of
Let's consider the following table with the single `FixedString(2)` column:
-```
+```text
┌─name──┐
│ b │
└───────┘
@@ -38,15 +38,14 @@ Let's consider the following table with the single `FixedString(2)` column:
The query `SELECT * FROM FixedStringTable WHERE a = 'b'` does not return any data as a result. We should complement the filter pattern with null bytes.
-```
+```sql
SELECT * FROM FixedStringTable
WHERE a = 'b\0'
-
+```
+```text
┌─a─┐
│ b │
└───┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
This behavior differs from MySQL behavior for the `CHAR` type (where strings are padded with spaces, and the spaces are removed for output).
diff --git a/docs/en/data_types/float.md b/docs/en/data_types/float.md
index 29ff604bb02..a365c80b4d1 100644
--- a/docs/en/data_types/float.md
+++ b/docs/en/data_types/float.md
@@ -13,11 +13,11 @@ We recommend that you store data in integer form whenever possible. For example,
- Computations with floating-point numbers might produce a rounding error.
-``` sql
+```sql
SELECT 1 - 0.9
```
-```
+```text
┌───────minus(1, 0.9)─┐
│ 0.09999999999999998 │
└─────────────────────┘
@@ -33,11 +33,11 @@ In contrast to standard SQL, ClickHouse supports the following categories of flo
- `Inf` – Infinity.
-``` sql
+```sql
SELECT 0.5 / 0
```
-```
+```text
┌─divide(0.5, 0)─┐
│ inf │
└────────────────┘
@@ -45,11 +45,11 @@ SELECT 0.5 / 0
- `-Inf` – Negative infinity.
-``` sql
+```sql
SELECT -0.5 / 0
```
-```
+```text
┌─divide(-0.5, 0)─┐
│ -inf │
└─────────────────┘
@@ -57,11 +57,11 @@ SELECT -0.5 / 0
- `NaN` – Not a number.
-```
+```sql
SELECT 0 / 0
```
-```
+```text
┌─divide(0, 0)─┐
│ nan │
└──────────────┘
diff --git a/docs/en/data_types/nested_data_structures/aggregatefunction.md b/docs/en/data_types/nested_data_structures/aggregatefunction.md
index 95ed4d751e6..46e43ea5d91 100644
--- a/docs/en/data_types/nested_data_structures/aggregatefunction.md
+++ b/docs/en/data_types/nested_data_structures/aggregatefunction.md
@@ -33,7 +33,7 @@ To insert data, use `INSERT SELECT` with aggregate `-State`- functions.
**Function examples**
-```
+```sql
uniqState(UserID)
quantilesState(0.5, 0.9)(SendTiming)
```
diff --git a/docs/en/data_types/nested_data_structures/nested.md b/docs/en/data_types/nested_data_structures/nested.md
index 1d90048d626..3083d0f915c 100644
--- a/docs/en/data_types/nested_data_structures/nested.md
+++ b/docs/en/data_types/nested_data_structures/nested.md
@@ -4,7 +4,7 @@ A nested data structure is like a nested table. The parameters of a nested data
Example:
-``` sql
+```sql
CREATE TABLE test.visits
(
CounterID UInt32,
@@ -35,7 +35,7 @@ In most cases, when working with a nested data structure, its individual columns
Example:
-``` sql
+```sql
SELECT
Goals.ID,
Goals.EventTime
@@ -44,7 +44,7 @@ WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
```
-```
+```text
┌─Goals.ID───────────────────────┬─Goals.EventTime───────────────────────────────────────────────────────────────────────────┐
│ [1073752,591325,591325] │ ['2014-03-17 16:38:10','2014-03-17 16:38:48','2014-03-17 16:42:27'] │
│ [1073752] │ ['2014-03-17 00:28:25'] │
@@ -63,7 +63,7 @@ It is easiest to think of a nested data structure as a set of multiple column ar
The only place where a SELECT query can specify the name of an entire nested data structure instead of individual columns is the ARRAY JOIN clause. For more information, see "ARRAY JOIN clause". Example:
-``` sql
+```sql
SELECT
Goal.ID,
Goal.EventTime
@@ -73,7 +73,7 @@ WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
```
-```
+```text
┌─Goal.ID─┬──────Goal.EventTime─┐
│ 1073752 │ 2014-03-17 16:38:10 │
│ 591325 │ 2014-03-17 16:38:48 │
diff --git a/docs/en/data_types/nullable.md b/docs/en/data_types/nullable.md
index 3030f71a9c6..a46bcbcb3f2 100644
--- a/docs/en/data_types/nullable.md
+++ b/docs/en/data_types/nullable.md
@@ -17,39 +17,20 @@ To store `Nullable` type values in table column, ClickHouse uses a separate file
## Usage example
+```sql
+CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog
```
-:) CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog
-
-CREATE TABLE t_null
-(
- x Int8,
- y Nullable(Int8)
-)
-ENGINE = TinyLog
-
-Ok.
-
-0 rows in set. Elapsed: 0.012 sec.
-
-:) INSERT INTO t_null VALUES (1, NULL), (2, 3)
-
-INSERT INTO t_null VALUES
-
-Ok.
-
-1 rows in set. Elapsed: 0.007 sec.
-
-:) SELECT x + y FROM t_null
-
-SELECT x + y
-FROM t_null
-
+```sql
+INSERT INTO t_null VALUES (1, NULL), (2, 3)
+```
+```sql
+SELECT x + y FROM t_null
+```
+```text
┌─plus(x, y)─┐
│ ᴺᵁᴸᴸ │
│ 5 │
└────────────┘
-
-2 rows in set. Elapsed: 0.144 sec.
```
[Original article](https://clickhouse.yandex/docs/en/data_types/nullable/)
diff --git a/docs/en/data_types/special_data_types/nothing.md b/docs/en/data_types/special_data_types/nothing.md
index 2ccf7a7c72a..d0c34e1fecf 100644
--- a/docs/en/data_types/special_data_types/nothing.md
+++ b/docs/en/data_types/special_data_types/nothing.md
@@ -7,16 +7,13 @@ For example, literal [NULL](../../query_language/syntax.md#null-literal) has typ
The `Nothing` type can also be used to denote empty arrays:
-```bash
-:) SELECT toTypeName(array())
-
-SELECT toTypeName([])
-
+```sql
+SELECT toTypeName(array())
+```
+```text
┌─toTypeName(array())─┐
│ Array(Nothing) │
└─────────────────────┘
-
-1 rows in set. Elapsed: 0.062 sec.
```
diff --git a/docs/en/data_types/tuple.md b/docs/en/data_types/tuple.md
index d3d3c42cf36..7726c25ce85 100644
--- a/docs/en/data_types/tuple.md
+++ b/docs/en/data_types/tuple.md
@@ -11,24 +11,19 @@ Tuples can be the result of a query. In this case, for text formats other than J
You can use a function to create a tuple:
-```
+```sql
tuple(T1, T2, ...)
```
Example of creating a tuple:
+```sql
+SELECT tuple(1,'a') AS x, toTypeName(x)
```
-:) SELECT tuple(1,'a') AS x, toTypeName(x)
-
-SELECT
- (1, 'a') AS x,
- toTypeName(x)
-
+```text
┌─x───────┬─toTypeName(tuple(1, 'a'))─┐
│ (1,'a') │ Tuple(UInt8, String) │
└─────────┴───────────────────────────┘
-
-1 rows in set. Elapsed: 0.021 sec.
```
## Working with data types
@@ -37,18 +32,13 @@ When creating a tuple on the fly, ClickHouse automatically detects the type of e
Example of automatic data type detection:
-```
+```sql
SELECT tuple(1, NULL) AS x, toTypeName(x)
-
-SELECT
- (1, NULL) AS x,
- toTypeName(x)
-
+```
+```text
┌─x────────┬─toTypeName(tuple(1, NULL))──────┐
│ (1,NULL) │ Tuple(UInt8, Nullable(Nothing)) │
└──────────┴─────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
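Beyond creating tuples, individual elements can be read back; a small sketch (not part of the page being edited) using the 1-based `tupleElement` accessor:

```sql
-- tupleElement(t, n) returns the n-th element, counting from 1:
SELECT tupleElement(tuple(1, 'a'), 2) AS second; -- 'a'
```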
diff --git a/docs/en/data_types/uuid.md b/docs/en/data_types/uuid.md
index f66e1895a20..81e6d11435a 100644
--- a/docs/en/data_types/uuid.md
+++ b/docs/en/data_types/uuid.md
@@ -4,13 +4,13 @@ A universally unique identifier (UUID) is a 16-byte number used to identify reco
An example of a UUID value is shown below:
-```
+```text
61f0c404-5cb3-11e7-907b-a6006ad3dba0
```
If you do not specify the UUID column value when inserting a new record, the UUID value is filled with zero:
-```
+```text
00000000-0000-0000-0000-000000000000
```
@@ -24,13 +24,16 @@ To generate the UUID value, ClickHouse provides the [generateUUIDv4](../query_la
This example demonstrates creating a table with the UUID type column and inserting a value into the table.
-``` sql
-:) CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
-
-:) INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1'
-
-:) SELECT * FROM t_uuid
-
+```sql
+CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
+```
+```sql
+INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1'
+```
+```sql
+SELECT * FROM t_uuid
+```
+```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
└──────────────────────────────────────┴───────────┘
@@ -40,11 +43,13 @@ This example demonstrates creating a table with the UUID type column and inserti
In this example, the UUID column value is not specified when inserting a new record.
-``` sql
-:) INSERT INTO t_uuid (y) VALUES ('Example 2')
-
-:) SELECT * FROM t_uuid
-
+```sql
+INSERT INTO t_uuid (y) VALUES ('Example 2')
+```
+```sql
+SELECT * FROM t_uuid
+```
+```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
│ 00000000-0000-0000-0000-000000000000 │ Example 2 │
diff --git a/docs/en/database_engines/mysql.md b/docs/en/database_engines/mysql.md
index 24352c1924c..ba2405de0ea 100644
--- a/docs/en/database_engines/mysql.md
+++ b/docs/en/database_engines/mysql.md
@@ -55,7 +55,7 @@ All other MySQL data types are converted into [String](../data_types/string.md).
Table in MySQL:
-```
+```text
mysql> USE test;
Database changed
diff --git a/docs/en/development/build.md b/docs/en/development/build.md
index f67ff45786a..9bb9a7c6471 100644
--- a/docs/en/development/build.md
+++ b/docs/en/development/build.md
@@ -3,21 +3,21 @@
## Install Git and Pbuilder
```bash
-sudo apt-get update
-sudo apt-get install git pbuilder debhelper lsb-release fakeroot sudo debian-archive-keyring debian-keyring
+$ sudo apt-get update
+$ sudo apt-get install git pbuilder debhelper lsb-release fakeroot sudo debian-archive-keyring debian-keyring
```
## Checkout ClickHouse Sources
```bash
-git clone --recursive --branch master https://github.com/ClickHouse/ClickHouse.git
-cd ClickHouse
+$ git clone --recursive --branch master https://github.com/ClickHouse/ClickHouse.git
+$ cd ClickHouse
```
## Run Release Script
```bash
-./release
+$ ./release
```
# How to Build ClickHouse for Development
@@ -29,13 +29,13 @@ Only x86_64 with SSE 4.2 is supported. Support for AArch64 is experimental.
To test for SSE 4.2, do
```bash
-grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
+$ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
```
## Install Git and CMake
```bash
-sudo apt-get install git cmake ninja-build
+$ sudo apt-get install git cmake ninja-build
```
Or cmake3 instead of cmake on older systems.
@@ -47,10 +47,10 @@ There are several ways to do this.
### Install from a PPA Package
```bash
-sudo apt-get install software-properties-common
-sudo apt-add-repository ppa:ubuntu-toolchain-r/test
-sudo apt-get update
-sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install software-properties-common
+$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
+$ sudo apt-get update
+$ sudo apt-get install gcc-9 g++-9
```
### Install from Sources
@@ -60,33 +60,35 @@ Look at [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/Click
## Use GCC 9 for Builds
```bash
-export CC=gcc-9
-export CXX=g++-9
+$ export CC=gcc-9
+$ export CXX=g++-9
```
## Install Required Libraries from Packages
```bash
-sudo apt-get install libicu-dev libreadline-dev gperf
+$ sudo apt-get install libicu-dev libreadline-dev gperf
```
## Checkout ClickHouse Sources
```bash
-git clone --recursive git@github.com:ClickHouse/ClickHouse.git
-# or: git clone --recursive https://github.com/ClickHouse/ClickHouse.git
-
-cd ClickHouse
+$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git
+```
+or
+```bash
+$ git clone --recursive https://github.com/ClickHouse/ClickHouse.git
+$ cd ClickHouse
```
## Build ClickHouse
```bash
-mkdir build
-cd build
-cmake ..
-ninja
-cd ..
+$ mkdir build
+$ cd build
+$ cmake ..
+$ ninja
+$ cd ..
```
To create an executable, run `ninja clickhouse`.
diff --git a/docs/en/development/build_osx.md b/docs/en/development/build_osx.md
index 928f8f88dfd..c6a2be20530 100644
--- a/docs/en/development/build_osx.md
+++ b/docs/en/development/build_osx.md
@@ -5,22 +5,25 @@ Build should work on Mac OS X 10.12.
## Install Homebrew
```bash
-/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
## Install Required Compilers, Tools, and Libraries
```bash
-brew install cmake ninja gcc icu4c openssl libtool gettext readline gperf
+$ brew install cmake ninja gcc icu4c openssl libtool gettext readline gperf
```
## Checkout ClickHouse Sources
```bash
-git clone --recursive git@github.com:yandex/ClickHouse.git
-# or: git clone --recursive https://github.com/yandex/ClickHouse.git
+$ git clone --recursive git@github.com:yandex/ClickHouse.git
+```
+or
+```bash
+$ git clone --recursive https://github.com/yandex/ClickHouse.git
-cd ClickHouse
+$ cd ClickHouse
```
For the latest stable version, switch to the `stable` branch.
@@ -28,11 +31,11 @@ For the latest stable version, switch to the `stable` branch.
## Build ClickHouse
```bash
-mkdir build
-cd build
-cmake .. -DCMAKE_CXX_COMPILER=`which g++-8` -DCMAKE_C_COMPILER=`which gcc-8`
-ninja
-cd ..
+$ mkdir build
+$ cd build
+$ cmake .. -DCMAKE_CXX_COMPILER=`which g++-8` -DCMAKE_C_COMPILER=`which gcc-8`
+$ ninja
+$ cd ..
```
## Caveats
@@ -45,7 +48,7 @@ If you intend to run clickhouse-server, make sure to increase the system's maxfi
To do so, create the following file:
/Library/LaunchDaemons/limit.maxfiles.plist:
-``` xml
+```xml
@@ -70,7 +73,7 @@ To do so, create the following file:
```
Execute the following command:
-``` bash
+```bash
$ sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
```
diff --git a/docs/en/development/tests.md b/docs/en/development/tests.md
index 881aaf9dd46..e451d17e717 100644
--- a/docs/en/development/tests.md
+++ b/docs/en/development/tests.md
@@ -86,21 +86,21 @@ Note that all clickhouse tools (server, client, etc) are just symlinks to a sing
Alternatively, you can install the ClickHouse package: either a stable release from the Yandex repository, or a package you build yourself with `./release` in the ClickHouse sources root. Then start the server with `sudo service clickhouse-server start` (or `stop` to stop the server). Look for logs at `/var/log/clickhouse-server/clickhouse-server.log`.
When ClickHouse is already installed on your system, you can build a new `clickhouse` binary and replace the existing binary:
-```
-sudo service clickhouse-server stop
-sudo cp ./clickhouse /usr/bin/
-sudo service clickhouse-server start
+```bash
+$ sudo service clickhouse-server stop
+$ sudo cp ./clickhouse /usr/bin/
+$ sudo service clickhouse-server start
```
You can also stop the system clickhouse-server and run your own with the same configuration but with logging to the terminal:
-```
-sudo service clickhouse-server stop
-sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
+```bash
+$ sudo service clickhouse-server stop
+$ sudo -u clickhouse /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
Example with gdb:
-```
-sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
+```bash
+$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
```
If the system clickhouse-server is already running and you don't want to stop it, you can change port numbers in your `config.xml` (or override them in a file in `config.d` directory), provide appropriate data path, and run it.
@@ -112,7 +112,7 @@ If the system clickhouse-server is already running and you don't want to stop it
Before publishing a release as stable, we deploy it on a testing environment. The testing environment is a cluster that processes 1/39 of the [Yandex.Metrica](https://metrica.yandex.com/) data. We share our testing environment with the Yandex.Metrica team. ClickHouse is upgraded without downtime on top of the existing data. We first check that data is processed successfully without lagging behind realtime, that replication continues to work, and that no issues are visible to the Yandex.Metrica team. The first check can be done in the following way:
-```
+```sql
SELECT hostName() AS h, any(version()), any(uptime()), max(UTCEventTime), count() FROM remote('example01-01-{1..3}t', merge, hits) WHERE EventDate >= today() - 2 GROUP BY h ORDER BY h;
```
@@ -126,16 +126,16 @@ After deploying to testing environment we run load testing with queries from pro
Make sure you have enabled `query_log` on your production cluster.
Collect query log for a day or more:
-```
-clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
+```bash
+$ clickhouse-client --query="SELECT DISTINCT query FROM system.query_log WHERE event_date = today() AND query LIKE '%ym:%' AND query NOT LIKE '%system.query_log%' AND type = 2 AND is_initial_query" > queries.tsv
```
This is a somewhat complicated example. `type = 2` filters queries that were executed successfully. `query LIKE '%ym:%'` selects relevant queries from Yandex.Metrica. `is_initial_query` selects only queries that were initiated by the client, not by ClickHouse itself (as parts of distributed query processing).
`scp` this log to your testing cluster and run it as follows:
-```
-clickhouse benchmark --concurrency 16 < queries.tsv
+```bash
+$ clickhouse benchmark --concurrency 16 < queries.tsv
```
(you probably also want to specify a `--user`)
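Before exporting the log as above, it can help to eyeball what `query_log` captured; a hedged sketch against `system.query_log` (the `type = 2` filter marks successfully finished queries, as the text notes):

```sql
-- Peek at the slowest successfully finished queries captured today:
SELECT query, query_duration_ms
FROM system.query_log
WHERE event_date = today() AND type = 2
ORDER BY query_duration_ms DESC
LIMIT 10;
```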
diff --git a/docs/en/faq/general.md b/docs/en/faq/general.md
index 4cd9189c256..41026e54a08 100644
--- a/docs/en/faq/general.md
+++ b/docs/en/faq/general.md
@@ -17,7 +17,7 @@ If you use Oracle through the ODBC driver as a source of external dictionaries,
**Example**
-```
+```sql
NLS_LANG=RUSSIAN_RUSSIA.UTF8
```
diff --git a/docs/en/getting_started/example_datasets/amplab_benchmark.md b/docs/en/getting_started/example_datasets/amplab_benchmark.md
index e5d9812a9c3..29c404cedfe 100644
--- a/docs/en/getting_started/example_datasets/amplab_benchmark.md
+++ b/docs/en/getting_started/example_datasets/amplab_benchmark.md
@@ -7,21 +7,21 @@ Sign up for a free account at . You will need a credit c
Run the following in the console:
```bash
-sudo apt-get install s3cmd
-mkdir tiny; cd tiny;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ .
-cd ..
-mkdir 1node; cd 1node;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ .
-cd ..
-mkdir 5nodes; cd 5nodes;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ .
-cd ..
+$ sudo apt-get install s3cmd
+$ mkdir tiny; cd tiny;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ .
+$ cd ..
+$ mkdir 1node; cd 1node;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ .
+$ cd ..
+$ mkdir 5nodes; cd 5nodes;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ .
+$ cd ..
```
Run the following ClickHouse queries:
-``` sql
+```sql
CREATE TABLE rankings_tiny
(
pageURL String,
@@ -86,12 +86,12 @@ CREATE TABLE uservisits_5nodes_on_single
Go back to the console:
```bash
-for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done
-for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done
-for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done
-for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done
-for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done
-for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done
+$ for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done
+$ for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done
+$ for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done
+$ for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done
+$ for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done
+$ for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done
```
Queries for obtaining data samples:
diff --git a/docs/en/getting_started/example_datasets/criteo.md b/docs/en/getting_started/example_datasets/criteo.md
index 268e8346d6c..8c57a2b0e9b 100644
--- a/docs/en/getting_started/example_datasets/criteo.md
+++ b/docs/en/getting_started/example_datasets/criteo.md
@@ -4,14 +4,14 @@ Download the data from
-curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
-# now create table
-clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
-clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
-# import data
-cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000
-# optionally you can optimize table
-clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL"
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
+$ curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
+$ # now create table
+$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
+$ clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
+$ # import data
+$ cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000
+$ # optionally you can optimize table
+$ clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL"
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```
**Download and import visits from compressed tsv-file**
```bash
-curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
-# now create table
-clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
-clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign)"
-# import data
-cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000
-# optionally you can optimize table
-clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL"
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
+$ curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
+$ # now create table
+$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
+$ clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign)"
+$ # import data
+$ cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000
+$ # optionally you can optimize table
+$ clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL"
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Queries
-Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
+Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
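+For instance, a minimal smoke test of the freshly imported copy (a sketch using the `datasets.hits_v1` table and the `EventDate` column from the `CREATE TABLE` statement above, rather than `test.hits`):
+```bash
+$ # page views per day, as a quick sanity check of the import
+$ clickhouse-client --query "SELECT EventDate, count() AS views FROM datasets.hits_v1 GROUP BY EventDate ORDER BY EventDate LIMIT 10"
+```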
diff --git a/docs/en/getting_started/example_datasets/nyc_taxi.md b/docs/en/getting_started/example_datasets/nyc_taxi.md
index ed073fc85a6..a3226e3d28a 100644
--- a/docs/en/getting_started/example_datasets/nyc_taxi.md
+++ b/docs/en/getting_started/example_datasets/nyc_taxi.md
@@ -29,8 +29,8 @@ It takes about 20-30 minutes to process each month's worth of data in PostgreSQL
You can check the number of downloaded rows as follows:
-```
-time psql nyc-taxi-data -c "SELECT count(*) FROM trips;"
+```bash
+$ time psql nyc-taxi-data -c "SELECT count(*) FROM trips;"
## Count
1298979494
(1 row)
@@ -44,7 +44,7 @@ The data in PostgreSQL uses 370 GB of space.
Exporting the data from PostgreSQL:
-``` sql
+```sql
COPY
(
SELECT trips.id,
@@ -119,7 +119,7 @@ This takes about 5 hours. The resulting TSV file is 590612904969 bytes.
Create a temporary table in ClickHouse:
-``` sql
+```sql
CREATE TABLE trips
(
trip_id UInt32,
@@ -178,8 +178,8 @@ dropoff_puma Nullable(String)
It is needed for converting fields to more correct data types and, if possible, to eliminate NULLs.
-```
-time clickhouse-client --query="INSERT INTO trips FORMAT TabSeparated" < trips.tsv
+```bash
+$ time clickhouse-client --query="INSERT INTO trips FORMAT TabSeparated" < trips.tsv
real 75m56.214s
```
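To make the NULL elimination mentioned above concrete, here is a hypothetical spot check of one of the Nullable columns (`dropoff_puma Nullable(String)` in the temporary table; column choice is an assumption for illustration):
```bash
$ # how many rows would need a NULL replaced during conversion
$ clickhouse-client --query "SELECT countIf(isNull(dropoff_puma)) AS nulls, count() AS total FROM trips"
```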
@@ -196,7 +196,7 @@ To start, we'll create a table on a single server. Later we will make the table
Create and populate a summary table:
-```
+```sql
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
@@ -263,13 +263,10 @@ To load it faster, you can create the table with the `Log` engine instead of `Me
The table uses 126 GB of disk space.
+```sql
+SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mergetree' AND active
```
-:) SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mergetree' AND active
-
-SELECT formatReadableSize(sum(bytes))
-FROM system.parts
-WHERE (table = 'trips_mergetree') AND active
-
+```text
┌─formatReadableSize(sum(bytes))─┐
│ 126.18 GiB │
└────────────────────────────────┘
@@ -277,14 +274,14 @@ WHERE (table = 'trips_mergetree') AND active
Among other things, you can run the OPTIMIZE query on MergeTree. But it's not required, since everything will be fine without it.
-## Dowload of Prepared Partitions
+## Download of Prepared Partitions
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
-tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
-# check permissions of unpacked data, fix if required
-sudo service clickhouse-server restart
-clickhouse-client --query "select count(*) from datasets.trips_mergetree"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
+$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to ClickHouse data directory
+$ # check permissions of unpacked data, fix if required
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "select count(*) from datasets.trips_mergetree"
```
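The permission check in the comment above usually means making sure the unpacked files belong to the user the server runs as; a minimal sketch, assuming the conventional `clickhouse` user and the default data directory:
```bash
$ sudo chown -R clickhouse:clickhouse /var/lib/clickhouse/data /var/lib/clickhouse/metadata
```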
!!!info
@@ -296,7 +293,7 @@ clickhouse-client --query "select count(*) from datasets.trips_mergetree"
Q1:
-``` sql
+```sql
SELECT cab_type, count(*) FROM trips_mergetree GROUP BY cab_type
```
@@ -304,7 +301,7 @@ SELECT cab_type, count(*) FROM trips_mergetree GROUP BY cab_type
Q2:
-``` sql
+```sql
SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenger_count
```
@@ -312,7 +309,7 @@ SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenge
Q3:
-``` sql
+```sql
SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetree GROUP BY passenger_count, year
```
@@ -320,7 +317,7 @@ SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetr
Q4:
-``` sql
+```sql
SELECT passenger_count, toYear(pickup_date) AS year, round(trip_distance) AS distance, count(*)
FROM trips_mergetree
GROUP BY passenger_count, year, distance
@@ -339,19 +336,19 @@ Creating a table on three servers:
On each server:
-```
+```sql
CREATE TABLE default.trips_mergetree_third ( trip_id UInt32, vendor_id Enum8('1' = 1, '2' = 2, 'CMT' = 3, 'VTS' = 4, 'DDS' = 5, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14), pickup_date Date, pickup_datetime DateTime, dropoff_date Date, dropoff_datetime DateTime, store_and_fwd_flag UInt8, rate_code_id UInt8, pickup_longitude Float64, pickup_latitude Float64, dropoff_longitude Float64, dropoff_latitude Float64, passenger_count UInt8, trip_distance Float64, fare_amount Float32, extra Float32, mta_tax Float32, tip_amount Float32, tolls_amount Float32, ehail_fee Float32, improvement_surcharge Float32, total_amount Float32, payment_type_ Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4), trip_type UInt8, pickup FixedString(25), dropoff FixedString(25), cab_type Enum8('yellow' = 1, 'green' = 2, 'uber' = 3), pickup_nyct2010_gid UInt8, pickup_ctlabel Float32, pickup_borocode UInt8, pickup_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), pickup_ct2010 FixedString(6), pickup_boroct2010 FixedString(7), pickup_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), pickup_ntacode FixedString(4), pickup_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), pickup_puma UInt16, dropoff_nyct2010_gid UInt8, dropoff_ctlabel Float32, dropoff_borocode UInt8, dropoff_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), dropoff_ct2010 FixedString(6), dropoff_boroct2010 FixedString(7), dropoff_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), dropoff_ntacode FixedString(4), dropoff_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), dropoff_puma UInt16) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
```
On the source server:
-``` sql
+```sql
CREATE TABLE trips_mergetree_x3 AS trips_mergetree_third ENGINE = Distributed(perftest, default, trips_mergetree_third, rand())
```
The following query redistributes data:
-``` sql
+```sql
INSERT INTO trips_mergetree_x3 SELECT * FROM trips_mergetree
```
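Once the redistribution finishes, one way to verify the spread (a sketch, assuming the table names above) is to compare the local count on each server with the total seen through the distributed table:
```bash
$ # on each of the three servers: the local share of the data
$ clickhouse-client --query "SELECT count() FROM default.trips_mergetree_third"
$ # on any server: the total across the cluster via the distributed table
$ clickhouse-client --query "SELECT count() FROM default.trips_mergetree_x3"
```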
diff --git a/docs/en/getting_started/example_datasets/ontime.md b/docs/en/getting_started/example_datasets/ontime.md
index d076f5b6469..5df0bd1fe5e 100644
--- a/docs/en/getting_started/example_datasets/ontime.md
+++ b/docs/en/getting_started/example_datasets/ontime.md
@@ -24,7 +24,7 @@ done
Creating a table:
-``` sql
+```sql
CREATE TABLE `ontime` (
`Year` UInt16,
`Quarter` UInt8,
@@ -141,17 +141,17 @@ CREATE TABLE `ontime` (
Loading data:
```bash
-for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
+$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
```
-## Dowload of Prepared Partitions
+## Download of Prepared Partitions
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
-tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
-# check permissions of unpacked data, fix if required
-sudo service clickhouse-server restart
-clickhouse-client --query "select count(*) from datasets.ontime"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
+$ tar xvf ontime.tar -C /var/lib/clickhouse # path to ClickHouse data directory
+$ # check permissions of unpacked data, fix if required
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "select count(*) from datasets.ontime"
```
!!!info
@@ -162,7 +162,7 @@ clickhouse-client --query "select count(*) from datasets.ontime"
Q0.
-``` sql
+```sql
SELECT avg(c1)
FROM
(
@@ -174,7 +174,7 @@ FROM
Q1. The number of flights per day from the year 2000 to 2008
-``` sql
+```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE Year>=2000 AND Year<=2008
@@ -184,7 +184,7 @@ ORDER BY c DESC;
Q2. The number of flights delayed by more than 10 minutes, grouped by the day of the week, for 2000-2008
-``` sql
+```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE DepDelay>10 AND Year>=2000 AND Year<=2008
@@ -194,7 +194,7 @@ ORDER BY c DESC;
Q3. The number of delays by airport for 2000-2008
-``` sql
+```sql
SELECT Origin, count(*) AS c
FROM ontime
WHERE DepDelay>10 AND Year>=2000 AND Year<=2008
@@ -205,7 +205,7 @@ LIMIT 10;
Q4. The number of delays by carrier for 2007
-``` sql
+```sql
SELECT Carrier, count(*)
FROM ontime
WHERE DepDelay>10 AND Year=2007
@@ -215,7 +215,7 @@ ORDER BY count(*) DESC;
Q5. The percentage of delays by carrier for 2007
-``` sql
+```sql
SELECT Carrier, c, c2, c*100/c2 as c3
FROM
(
@@ -241,7 +241,7 @@ ORDER BY c3 DESC;
Better version of the same query:
-``` sql
+```sql
SELECT Carrier, avg(DepDelay>10)*100 AS c3
FROM ontime
WHERE Year=2007
@@ -251,7 +251,7 @@ ORDER BY Carrier
Q6. The previous request for a broader range of years, 2000-2008
-``` sql
+```sql
SELECT Carrier, c, c2, c*100/c2 as c3
FROM
(
@@ -277,7 +277,7 @@ ORDER BY c3 DESC;
Better version of the same query:
-``` sql
+```sql
SELECT Carrier, avg(DepDelay>10)*100 AS c3
FROM ontime
WHERE Year>=2000 AND Year<=2008
@@ -287,7 +287,7 @@ ORDER BY Carrier;
Q7. Percentage of flights delayed for more than 10 minutes, by year
-``` sql
+```sql
SELECT Year, c1/c2
FROM
(
@@ -311,7 +311,7 @@ ORDER BY Year;
Better version of the same query:
-``` sql
+```sql
SELECT Year, avg(DepDelay>10)
FROM ontime
GROUP BY Year
@@ -320,7 +320,7 @@ ORDER BY Year;
Q8. The most popular destinations by the number of directly connected cities for various year ranges
-``` sql
+```sql
SELECT DestCityName, uniqExact(OriginCityName) AS u
FROM ontime
WHERE Year>=2000 and Year<=2010
@@ -331,7 +331,7 @@ LIMIT 10;
Q9.
-``` sql
+```sql
SELECT Year, count(*) AS c1
FROM ontime
GROUP BY Year;
@@ -339,7 +339,7 @@ GROUP BY Year;
Q10.
-``` sql
+```sql
SELECT
min(Year), max(Year), Carrier, count(*) AS cnt,
sum(ArrDelayMinutes>30) AS flights_delayed,
@@ -357,7 +357,7 @@ LIMIT 1000;
Bonus:
-``` sql
+```sql
SELECT avg(cnt)
FROM
(
diff --git a/docs/en/getting_started/example_datasets/star_schema.md b/docs/en/getting_started/example_datasets/star_schema.md
index 545eaeea6a6..2e66ced7149 100644
--- a/docs/en/getting_started/example_datasets/star_schema.md
+++ b/docs/en/getting_started/example_datasets/star_schema.md
@@ -2,25 +2,25 @@
Compiling dbgen:
-```
-git clone git@github.com:vadimtk/ssb-dbgen.git
-cd ssb-dbgen
-make
+```bash
+$ git clone git@github.com:vadimtk/ssb-dbgen.git
+$ cd ssb-dbgen
+$ make
```
Generating data:
-```
-./dbgen -s 1000 -T c
-./dbgen -s 1000 -T l
-./dbgen -s 1000 -T p
-./dbgen -s 1000 -T s
-./dbgen -s 1000 -T d
+```bash
+$ ./dbgen -s 1000 -T c
+$ ./dbgen -s 1000 -T l
+$ ./dbgen -s 1000 -T p
+$ ./dbgen -s 1000 -T s
+$ ./dbgen -s 1000 -T d
```
Creating tables in ClickHouse:
-```
+```sql
CREATE TABLE customer
(
C_CUSTKEY UInt32,
@@ -85,16 +85,16 @@ ENGINE = MergeTree ORDER BY S_SUPPKEY;
Inserting data:
-```
-clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
-clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
-clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
-clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
+```bash
+$ clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
+$ clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
+$ clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
+$ clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
```
Converting "star schema" to denormalized "flat schema":
-```
+```sql
SET max_memory_usage = 20000000000, allow_experimental_multiple_joins_emulation = 1;
CREATE TABLE lineorder_flat
@@ -112,44 +112,56 @@ ALTER TABLE lineorder_flat DROP COLUMN C_CUSTKEY, DROP COLUMN S_SUPPKEY, DROP CO
Running the queries:
-```
Q1.1
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toYear(LO_ORDERDATE) = 1993 AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
-
+```
Q1.2
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toYYYYMM(LO_ORDERDATE) = 199401 AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35;
-
+```
Q1.3
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toISOWeek(LO_ORDERDATE) = 6 AND toYear(LO_ORDERDATE) = 1994 AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35;
-
+```
Q2.1
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q2.2
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_BRAND BETWEEN 'MFGR#2221' AND 'MFGR#2228' AND S_REGION = 'ASIA' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q2.3
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q3.1
+```sql
SELECT C_NATION, S_NATION, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997 GROUP BY C_NATION, S_NATION, year ORDER BY year asc, revenue desc;
-
+```
Q3.2
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES' AND year >= 1992 AND year <= 1997 GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q3.3
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND year >= 1992 AND year <= 1997 GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q3.4
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND toYYYYMM(LO_ORDERDATE) = '199712' GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q4.1
+```sql
SELECT toYear(LO_ORDERDATE) AS year, C_NATION, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year, C_NATION ORDER BY year, C_NATION;
-
+```
Q4.2
+```sql
SELECT toYear(LO_ORDERDATE) AS year, S_NATION, P_CATEGORY, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (year = 1997 OR year = 1998) AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year, S_NATION, P_CATEGORY ORDER BY year, S_NATION, P_CATEGORY;
-
+```
Q4.3
+```sql
SELECT toYear(LO_ORDERDATE) AS year, S_CITY, P_BRAND, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE S_NATION = 'UNITED STATES' AND (year = 1997 OR year = 1998) AND P_CATEGORY = 'MFGR#14' GROUP BY year, S_CITY, P_BRAND ORDER BY year, S_CITY, P_BRAND;
```
diff --git a/docs/en/getting_started/example_datasets/wikistat.md b/docs/en/getting_started/example_datasets/wikistat.md
index f81d0525367..b9ab6f184f7 100644
--- a/docs/en/getting_started/example_datasets/wikistat.md
+++ b/docs/en/getting_started/example_datasets/wikistat.md
@@ -4,7 +4,7 @@ See:
Creating a table:
-``` sql
+```sql
CREATE TABLE wikistat
(
date Date,
@@ -20,9 +20,9 @@ CREATE TABLE wikistat
Loading data:
```bash
-for i in {2007..2016}; do for j in {01..12}; do echo $i-$j >&2; curl -sSL "http://dumps.wikimedia.org/other/pagecounts-raw/$i/$i-$j/" | grep -oE 'pagecounts-[0-9]+-[0-9]+\.gz'; done; done | sort | uniq | tee links.txt
-cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pagecounts-raw/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1/')/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1-\2/')/$link; done
-ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
+$ for i in {2007..2016}; do for j in {01..12}; do echo $i-$j >&2; curl -sSL "http://dumps.wikimedia.org/other/pagecounts-raw/$i/$i-$j/" | grep -oE 'pagecounts-[0-9]+-[0-9]+\.gz'; done; done | sort | uniq | tee links.txt
+$ cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pagecounts-raw/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1/')/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1-\2/')/$link; done
+$ ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
```
diff --git a/docs/en/getting_started/index.md b/docs/en/getting_started/index.md
index 8cdbae86e5e..ed7335b748b 100644
--- a/docs/en/getting_started/index.md
+++ b/docs/en/getting_started/index.md
@@ -18,8 +18,8 @@ Yandex ClickHouse team recommends using official pre-compiled `deb` packages for
To install official packages, add the Yandex repository in `/etc/apt/sources.list` or in a separate `/etc/apt/sources.list.d/clickhouse.list` file:
-```
-deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
+```text
+deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
```
If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
@@ -27,10 +27,10 @@ If you want to use the most recent version, replace `stable` with `testing` (thi
Then run these commands to actually install packages:
```bash
-sudo apt-get install dirmngr # optional
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
+$ sudo apt-get install dirmngr # optional
+$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
+$ sudo apt-get update
+$ sudo apt-get install clickhouse-client clickhouse-server
```
You can also download and install packages manually from here: .
@@ -42,9 +42,9 @@ Yandex ClickHouse team recommends using official pre-compiled `rpm` packages for
First you need to add the official repository:
```bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+$ sudo yum install yum-utils
+$ sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+$ sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
```
If you want to use the most recent version, replace `stable` with `testing` (this is recommended for your testing environments).
@@ -52,7 +52,7 @@ If you want to use the most recent version, replace `stable` with `testing` (thi
Then run these commands to actually install packages:
```bash
-sudo yum install clickhouse-server clickhouse-client
+$ sudo yum install clickhouse-server clickhouse-client
```
You can also download and install packages manually from here: .
@@ -67,13 +67,13 @@ To manually compile ClickHouse, follow the instructions for [Linux](../developme
You can compile packages and install them, or use the programs without installing packages. Also, by building manually you can disable the SSE 4.2 requirement or build for AArch64 CPUs.
-```
+```text
Client: dbms/programs/clickhouse-client
Server: dbms/programs/clickhouse-server
```
You'll need to create data and metadata folders and `chown` them for the desired user. Their paths can be changed in the server config (src/dbms/programs/server/config.xml); by default they are:
-```
+```text
/opt/clickhouse/data/default/
/opt/clickhouse/metadata/default/
```
@@ -129,18 +129,14 @@ $ ./clickhouse-client
ClickHouse client version 0.0.18749.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
+```
+```sql
SELECT 1
-
+```
+```text
┌─1─┐
│ 1 │
└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
```
**Congratulations, the system works!**
diff --git a/docs/en/index.md b/docs/en/index.md
index 32bfe2d4fa3..40158f524ec 100644
--- a/docs/en/index.md
+++ b/docs/en/index.md
@@ -78,22 +78,16 @@ See the difference?
For example, the query "count the number of records for each advertising platform" requires reading one "advertising platform ID" column, which takes up 1 byte uncompressed. If most of the traffic was not from advertising platforms, you can expect at least 10-fold compression of this column. When using a quick compression algorithm, data decompression is possible at a speed of at least several gigabytes of uncompressed data per second. In other words, this query can be processed at a speed of approximately several billion rows per second on a single server. This speed is actually achieved in practice.
Example
-```
+```bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
-
-:) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
-
-SELECT
-CounterID,
-count()
-FROM hits
-GROUP BY CounterID
-ORDER BY count() DESC
-LIMIT 20
-
+```
+```sql
+SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
+```
+```text
┌─CounterID─┬──count()─┐
│ 114208 │ 56057344 │
│ 115080 │ 51619590 │
@@ -116,10 +110,6 @@ LIMIT 20
│ 115079 │ 8837972 │
│ 337234 │ 8205961 │
└───────────┴──────────┘
-
-20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)
-
-:)
```
diff --git a/docs/en/interfaces/cli.md b/docs/en/interfaces/cli.md
index b6e59c4aa50..b582ab447d2 100644
--- a/docs/en/interfaces/cli.md
+++ b/docs/en/interfaces/cli.md
@@ -22,14 +22,14 @@ Similar to the HTTP interface, when using the 'query' parameter and sending data
Example of using the client to insert data:
```bash
-echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
-cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
3, 'some text', '2016-08-14 00:00:00'
4, 'some more text', '2016-08-14 00:00:01'
_EOF
-cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```
In batch mode, the default data format is TabSeparated. You can set the format in the FORMAT clause of the query.
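For example, a hypothetical one-liner that overrides the default format with the FORMAT clause:
```bash
$ clickhouse-client --query="SELECT 1 AS x, 'hello' AS s FORMAT CSVWithNames"
```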
@@ -70,14 +70,14 @@ The command-line client allows passing external data (external temporary tables)
You can create a query with parameters and pass values to them from the client application. This allows you to avoid formatting the query with specific dynamic values on the client side. For example:
```bash
-clickhouse-client --param_parName="[1, 2]" -q "SELECT * FROM table WHERE a = {parName:Array(UInt16)}"
+$ clickhouse-client --param_parName="[1, 2]" -q "SELECT * FROM table WHERE a = {parName:Array(UInt16)}"
```
#### Query Syntax {#cli-queries-with-parameters-syntax}
Format a query as usual, then place the values that you want to pass from the app parameters to the query in braces in the following format:
-```
+```sql
{<name>:<data type>}
```
@@ -87,7 +87,7 @@ Format a query as usual, then place the values that you want to pass from the ap
#### Example
```bash
-clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
+$ clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
```
## Configuring {#interfaces_cli_configuration}
diff --git a/docs/en/interfaces/formats.md b/docs/en/interfaces/formats.md
index 67fe9762ffb..88699adbd94 100644
--- a/docs/en/interfaces/formats.md
+++ b/docs/en/interfaces/formats.md
@@ -47,11 +47,11 @@ The `TabSeparated` format is convenient for processing data using custom program
The `TabSeparated` format supports outputting total values (when using WITH TOTALS) and extreme values (when 'extremes' is set to 1). In these cases, the total values and extremes are output after the main data. The main result, total values, and extremes are separated from each other by an empty line. Example:
-``` sql
+```sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
```
-```
+```text
2014-03-17 1406958
2014-03-18 1383658
2014-03-19 1405797
@@ -83,7 +83,7 @@ As an exception, parsing dates with times is also supported in Unix timestamp fo
Strings are output with backslash-escaped special characters. The following escape sequences are used for output: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Parsing also supports the sequences `\a`, `\v`, and `\xHH` (hex escape sequences) and any `\c` sequences, where `c` is any character (these sequences are converted to `c`). Thus, reading data supports formats where a line feed can be written as `\n` or `\`, or as a line feed. For example, the string `Hello world` with a line feed between the words instead of a space can be parsed in any of the following variations:
-```
+```text
Hello\nworld
Hello\
@@ -211,7 +211,7 @@ format_schema_rows_between_delimiter = '\n '
```
`Insert` example:
-```
+```text
Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
@@ -241,7 +241,7 @@ format_schema_rows_between_delimiter = ','
Similar to TabSeparated, but outputs a value in name=value format. Names are escaped the same way as in TabSeparated format, and the = symbol is also escaped.
-```
+```text
SearchPhrase= count()=8267016
SearchPhrase=bathroom interior design count()=2166
SearchPhrase=yandex count()=1655
@@ -260,7 +260,7 @@ SearchPhrase=baku count()=1000
SELECT * FROM t_null FORMAT TSKV
```
-```
+```text
x=1 y=\N
```
@@ -276,8 +276,8 @@ Comma Separated Values format ([RFC](https://tools.ietf.org/html/rfc4180)).
When formatting, rows are enclosed in double quotes. A double quote inside a string is output as two double quotes in a row. There are no other rules for escaping characters. Date and date-time are enclosed in double quotes. Numbers are output without quotes. Values are separated by a delimiter character, which is `,` by default. The delimiter character is defined in the setting [format_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter). Rows are separated using the Unix line feed (LF). Arrays are serialized in CSV as follows: first the array is serialized to a string as in TabSeparated format, and then the resulting string is output to CSV in double quotes. Tuples in CSV format are serialized as separate columns (that is, their nesting in the tuple is lost).
-```
-clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
+```bash
+$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
```
*By default, the delimiter is `,`. See the [format_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter) setting for more information.
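A quick sketch of the array and tuple rules described above (the expected output line is an assumption derived from that description, not captured output):
```bash
$ clickhouse-client --query="SELECT [1, 2, 3] AS arr, ('a', 42) AS t FORMAT CSV"
$ # expected output: "[1,2,3]","a",42
```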
@@ -300,7 +300,7 @@ Also prints the header row, similar to `TabSeparatedWithNames`.
Outputs data in JSON format. Besides data tables, it also outputs column names and types, along with some additional information: the total number of output rows, and the number of rows that could have been output if there weren't a LIMIT. Example:
-``` sql
+```sql
SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTALS ORDER BY c DESC LIMIT 5 FORMAT JSON
```
@@ -445,7 +445,7 @@ When inserting the data, you should provide a separate JSON object for each row.
### Inserting Data
-```
+```sql
INSERT INTO UserActivity FORMAT JSONEachRow {"PageViews":5, "UserID":"4324182021466249494", "Duration":146,"Sign":-1} {"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```
@@ -464,7 +464,7 @@ If `DEFAULT expr` is specified, ClickHouse uses different substitution rules dep
Consider the following table:
-```
+```sql
CREATE TABLE IF NOT EXISTS example_table
(
x UInt32,
@@ -482,7 +482,7 @@ CREATE TABLE IF NOT EXISTS example_table
Consider the `UserActivity` table as an example:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -491,7 +491,7 @@ Consider the `UserActivity` table as an example:
The query `SELECT * FROM UserActivity FORMAT JSONEachRow` returns:
-```
+```text
{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```
@@ -576,11 +576,11 @@ Each result block is output as a separate table. This is necessary so that block
Example (shown for the [PrettyCompact](#prettycompact) format):
-``` sql
+```sql
SELECT * FROM t_null
```
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
@@ -588,11 +588,11 @@ SELECT * FROM t_null
Rows are not escaped in Pretty* formats. Example is shown for the [PrettyCompact](#prettycompact) format:
-``` sql
+```sql
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
```
-```
+```text
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and character │
└──────────────────────────────────────┘
@@ -603,11 +603,11 @@ This format is only appropriate for outputting a query result, but not for parsi
The Pretty format supports outputting total values (when using WITH TOTALS) and extremes (when 'extremes' is set to 1). In these cases, total values and extreme values are output after the main data, in separate tables. Example (shown for the [PrettyCompact](#prettycompact) format):
-``` sql
+```sql
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
```
-```
+```text
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
@@ -646,7 +646,7 @@ Differs from Pretty in that ANSI-escape sequences aren't used. This is necessary
Example:
```bash
-watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
+$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
```
You can use the HTTP interface to display such output in a browser.
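For instance, the same no-escape output can be requested over HTTP (host and port assumed to be the defaults):
```bash
$ curl 'http://localhost:8123/?query=SELECT%20event,%20value%20FROM%20system.events%20FORMAT%20PrettyCompactNoEscapes'
```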
@@ -702,11 +702,11 @@ Prints each value on a separate line with the column name specified. This format
Example:
-``` sql
+```sql
SELECT * FROM t_null FORMAT Vertical
```
-```
+```text
Row 1:
──────
x: 1
@@ -714,11 +714,11 @@ y: ᴺᵁᴸᴸ
```
Rows are not escaped in Vertical format:
-``` sql
+```sql
SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
```
-```
+```text
Row 1:
──────
test: string with 'quotes' and with some special
@@ -807,12 +807,12 @@ Cap'n Proto is a binary message format similar to Protocol Buffers and Thrift, b
Cap'n Proto messages are strictly typed and not self-describing, meaning they need an external schema description. The schema is applied on the fly and cached for each query.
```bash
-cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
+$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
```
Where `schema.capnp` looks like this:
-```
+```capnp
struct Message {
SearchPhrase @0 :Text;
c @1 :Uint64;
@@ -842,7 +842,7 @@ cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FO
where the file `schemafile.proto` looks like this:
-```
+```protobuf
syntax = "proto3";
message MessageType {
@@ -859,7 +859,7 @@ If types of a column and a field of Protocol Buffers' message are different the
Nested messages are supported. For example, for the field `z` in the following message type
-```
+```protobuf
message MessageType {
message XType {
message YType {
@@ -876,7 +876,7 @@ Nested messages are suitable to input or output a [nested data structures](../da
Default values defined in a protobuf schema like this
-```
+```protobuf
syntax = "proto2";
message MessageType {
diff --git a/docs/en/interfaces/http.md b/docs/en/interfaces/http.md
index 80cf72ec0e2..ee05a1cdb64 100644
--- a/docs/en/interfaces/http.md
+++ b/docs/en/interfaces/http.md
@@ -75,31 +75,31 @@ The POST method of transmitting data is necessary for INSERT queries. In this ca
Examples: Creating a table:
```bash
-echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
```
Using the familiar INSERT query for data insertion:
```bash
-echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @-
```
Data can be sent separately from the query:
```bash
-echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
+$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
```
You can specify any data format. The 'Values' format is the same as what is used when writing INSERT INTO t VALUES:
```bash
-echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @-
+$ echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @-
```
To insert data from a tab-separated dump, specify the corresponding format:
```bash
-echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @-
+$ echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @-
```
Reading the table contents. Data is output in random order due to parallel query processing:
@@ -123,7 +123,7 @@ $ curl 'http://localhost:8123/?query=SELECT%20a%20FROM%20t'
Deleting the table.
```bash
-echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
```
For successful requests that don't return a data table, an empty response body is returned.
@@ -141,10 +141,10 @@ Examples of sending data with compression:
```bash
# Receiving compressed data from the server:
-curl -vsS "http://localhost:8123/?enable_http_compression=1" -d 'SELECT number FROM system.numbers LIMIT 10' -H 'Accept-Encoding: gzip'
+$ curl -vsS "http://localhost:8123/?enable_http_compression=1" -d 'SELECT number FROM system.numbers LIMIT 10' -H 'Accept-Encoding: gzip'
# Sending compressed data to the server:
-echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
+$ echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
```
!!! note "Note"
@@ -173,13 +173,13 @@ The username and password can be indicated in one of two ways:
1. Using HTTP Basic Authentication. Example:
```bash
-echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
+$ echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
```
2. In the 'user' and 'password' URL parameters. Example:
```bash
-echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
+$ echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
```
If the user name is not specified, the `default` name is used. If the password is not specified, the empty password is used.
@@ -207,7 +207,7 @@ Similarly, you can use ClickHouse sessions in the HTTP protocol. To do this, you
You can receive information about the progress of a query in `X-ClickHouse-Progress` response headers. To do this, enable [send_progress_in_http_headers](../operations/settings/settings.md#settings-send_progress_in_http_headers). Example of the header sequence:
-```
+```text
X-ClickHouse-Progress: {"read_rows":"2752512","read_bytes":"240570816","total_rows_to_read":"8880128"}
X-ClickHouse-Progress: {"read_rows":"5439488","read_bytes":"482285394","total_rows_to_read":"8880128"}
X-ClickHouse-Progress: {"read_rows":"8783786","read_bytes":"819092887","total_rows_to_read":"8880128"}
@@ -239,7 +239,7 @@ To ensure that the entire response is buffered, set `wait_end_of_query=1`. In th
Example:
```bash
-curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
+$ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
```
Use buffering to avoid situations where a query processing error occurred after the response code and HTTP headers were sent to the client. In this situation, an error message is written at the end of the response body, and on the client side, the error can only be detected at the parsing stage.
@@ -251,7 +251,7 @@ You can create a query with parameters and pass values for them from the corresp
### Example
```bash
-curl -sS "<address>?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
+$ curl -sS "<address>?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
```
[Original article](https://clickhouse.yandex/docs/en/interfaces/http_interface/)
diff --git a/docs/en/interfaces/jdbc.md b/docs/en/interfaces/jdbc.md
index d94394493f1..7eb8238fe65 100644
--- a/docs/en/interfaces/jdbc.md
+++ b/docs/en/interfaces/jdbc.md
@@ -1,6 +1,6 @@
# JDBC Driver
-* **[Official driver](https://github.com/yandex/clickhouse-jdbc)**
+* **[Official driver](https://github.com/ClickHouse/clickhouse-jdbc)**
* Third-party drivers:
* [ClickHouse-Native-JDBC](https://github.com/housepower/ClickHouse-Native-JDBC)
* [clickhouse4j](https://github.com/blynkkk/clickhouse4j)
diff --git a/docs/en/interfaces/odbc.md b/docs/en/interfaces/odbc.md
index 845c4e21fb3..07212df6687 100644
--- a/docs/en/interfaces/odbc.md
+++ b/docs/en/interfaces/odbc.md
@@ -1,6 +1,6 @@
# ODBC Driver
-- [Official driver](https://github.com/yandex/clickhouse-odbc).
+- [Official driver](https://github.com/ClickHouse/clickhouse-odbc).
[Original article](https://clickhouse.yandex/docs/en/interfaces/odbc/)
diff --git a/docs/en/interfaces/tcp.md b/docs/en/interfaces/tcp.md
index c17e8c15b5e..0c3e0e7885f 100644
--- a/docs/en/interfaces/tcp.md
+++ b/docs/en/interfaces/tcp.md
@@ -1,5 +1,5 @@
# Native Interface (TCP)
-The native protocol is used in the [command-line client](cli.md), for interserver communication during distributed query processing, and also in other C++ programs. Unfortunately, native ClickHouse protocol does not have formal specification yet, but it can be reverse engineered from ClickHouse source code (starting [around here](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
+The native protocol is used in the [command-line client](cli.md), for interserver communication during distributed query processing, and also in other C++ programs. Unfortunately, the native ClickHouse protocol does not have a formal specification yet, but it can be reverse-engineered from ClickHouse source code (starting [around here](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) and/or by intercepting and analyzing TCP traffic.
[Original article](https://clickhouse.yandex/docs/en/interfaces/tcp/)
diff --git a/docs/en/operations/backup.md b/docs/en/operations/backup.md
index f6cb381e602..fcbfb7fb40c 100644
--- a/docs/en/operations/backup.md
+++ b/docs/en/operations/backup.md
@@ -1,6 +1,6 @@
# Data Backup
-While [replication](table_engines/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can't just drop tables with a MergeTree-like engine containing more than 50 Gb of data](https://github.com/yandex/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards don't cover all possible cases and can be circumvented.
+While [replication](table_engines/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes — for example, by default [you can't just drop tables with a MergeTree-like engine containing more than 50 Gb of data](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). However, these safeguards don't cover all possible cases and can be circumvented.
In order to effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.
diff --git a/docs/en/operations/configuration_files.md b/docs/en/operations/configuration_files.md
index de4bb0a0f7b..ea10eb3bfe2 100644
--- a/docs/en/operations/configuration_files.md
+++ b/docs/en/operations/configuration_files.md
@@ -20,8 +20,10 @@ Substitutions can also be performed from ZooKeeper. To do this, specify the attr
The `config.xml` file can specify a separate config with user settings, profiles, and quotas. The relative path to this config is set in the 'users_config' element. By default, it is `users.xml`. If `users_config` is omitted, the user settings, profiles, and quotas are specified directly in `config.xml`.
In addition, `users_config` may have overrides in files from the `users_config.d` directory (for example, `users.d`) and substitutions. For example, you can have a separate config file for each user like this:
-``` xml
+```bash
$ cat /etc/clickhouse-server/users.d/alice.xml
+```
+```xml
<yandex>
    <users>
        <alice>
            <profile>analytics</profile>
            <networks>
                <ip>::/0</ip>
            </networks>
            <quota>default</quota>
        </alice>
    </users>
</yandex>
```
diff --git a/docs/en/operations/settings/constraints_on_settings.md b/docs/en/operations/settings/constraints_on_settings.md
index 144eab7b0c0..c3bc4f05f22 100644
--- a/docs/en/operations/settings/constraints_on_settings.md
+++ b/docs/en/operations/settings/constraints_on_settings.md
@@ -3,7 +3,7 @@
The constraints on settings can be defined in the `users` section of the `users.xml` configuration file and prohibit users from changing some of the settings with the `SET` query.
The constraints are defined as follows:
-```
+```xml
<profiles>
  <user_name>
    <constraints>
      <setting_name>
        <min>lower_boundary</min>
        <max>upper_boundary</max>
      </setting_name>
      <setting_name_2>
        <readonly/>
      </setting_name_2>
    </constraints>
  </user_name>
</profiles>
```
@@ -30,7 +30,7 @@ There are supported three types of constraints: `min`, `max`, `readonly`. The `m
**Example:** Let `users.xml` includes lines:
-```
+```xml
<profiles>
  <default>
    <max_memory_usage>10000000000</max_memory_usage>
    <force_index_by_date>0</force_index_by_date>
    <constraints>
      <max_memory_usage>
        <min>5000000000</min>
        <max>20000000000</max>
      </max_memory_usage>
      <force_index_by_date>
        <readonly/>
      </force_index_by_date>
    </constraints>
  </default>
</profiles>
```
@@ -51,13 +51,13 @@ There are supported three types of constraints: `min`, `max`, `readonly`. The `m
The following queries all throw exceptions:
-```
+```sql
SET max_memory_usage=20000000001;
SET max_memory_usage=4999999999;
SET force_index_by_date=1;
```
-```
+```text
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be greater than 20000000000.
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be less than 5000000000.
Code: 452, e.displayText() = DB::Exception: Setting force_index_by_date should not be changed.
diff --git a/docs/en/operations/settings/query_complexity.md b/docs/en/operations/settings/query_complexity.md
index c00f2132ebd..1e71fb5e333 100644
--- a/docs/en/operations/settings/query_complexity.md
+++ b/docs/en/operations/settings/query_complexity.md
@@ -37,7 +37,7 @@ Memory consumption is also restricted by the parameters `max_memory_usage_for_us
The maximum amount of RAM to use for running a user's queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). By default, the amount is not restricted (`max_memory_usage_for_user = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).
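+
+As a quick sketch (the limit value and query are illustrative), the setting can also be passed for a single client session:
+
+```bash
+$ clickhouse-client --max_memory_usage_for_user=10000000000 --query "SELECT count() FROM numbers(1000000)"
+```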
@@ -45,7 +45,7 @@ See also the description of [max_memory_usage](#settings_max_memory_usage).
The maximum amount of RAM to use for running all queries on a single server.
-Default values are defined in [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
+Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).
See also the description of [max_memory_usage](#settings_max_memory_usage).
diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md
index d4e433803ae..5591f82d037 100644
--- a/docs/en/operations/settings/settings.md
+++ b/docs/en/operations/settings/settings.md
@@ -179,7 +179,8 @@ Insert the [DateTime](../../data_types/datetime.md) type value with the differen
```sql
SET input_format_values_interpret_expressions = 0;
INSERT INTO datetime_t VALUES (now())
-
+```
+```text
Exception on client:
Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row 1)
```
@@ -187,7 +188,8 @@ Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row
```sql
SET input_format_values_interpret_expressions = 1;
INSERT INTO datetime_t VALUES (now())
-
+```
+```text
Ok.
```
@@ -196,7 +198,8 @@ The last query is equivalent to the following:
```sql
SET input_format_values_interpret_expressions = 0;
INSERT INTO datetime_t SELECT now()
-
+```
+```text
Ok.
```
@@ -599,7 +602,7 @@ ClickHouse supports the following algorithms of choosing replicas:
### Random (by default) {#load_balancing-random}
-```
+```sql
load_balancing = random
```
@@ -608,7 +611,7 @@ Disadvantages: Server proximity is not accounted for; if the replicas have diffe
### Nearest Hostname {#load_balancing-nearest_hostname}
-```
+```sql
load_balancing = nearest_hostname
```
@@ -622,7 +625,7 @@ We can also assume that when sending a query to the same server, in the absence
### In Order {#load_balancing-in_order}
-```
+```sql
load_balancing = in_order
```
@@ -632,7 +635,7 @@ This method is appropriate when you know exactly which replica is preferable.
### First or Random {#load_balancing-first_or_random}
-```
+```sql
load_balancing = first_or_random
```
@@ -859,9 +862,9 @@ Default value: 0.
## optimize_throw_if_noop {#setting-optimize_throw_if_noop}
-Enables or disables throwing an exception if the [OPTIMIZE](../../query_language/misc.md#misc_operations-optimize) query have not performed a merge.
+Enables or disables throwing an exception if an [OPTIMIZE](../../query_language/misc.md#misc_operations-optimize) query didn't perform a merge.
-By default `OPTIMIZE` returns successfully even if it haven't done anything. This setting allows to distinguish this situation and get the reason in exception message.
+By default, `OPTIMIZE` returns successfully even if it didn't do anything. This setting lets you differentiate these situations and get the reason in an exception message.
Possible values:
@@ -869,6 +872,8 @@ Possible values:
- 0 — Throwing an exception is disabled.
Default value: 0.
+
+
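+A minimal sketch of the effect (table and partition names are illustrative):
+
+```bash
+$ clickhouse-client --optimize_throw_if_noop=1 --query "OPTIMIZE TABLE visits PARTITION 201902"
+```
+
+With the setting enabled, the query above throws an exception instead of returning `Ok.` when no merge was performed.
+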
## distributed_replica_error_half_life {#settings-distributed_replica_error_half_life}
- Type: seconds
@@ -896,16 +901,16 @@ Error count of each replica is capped at this value, preventing a single replica
## os_thread_priority {#setting-os_thread_priority}
-Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. OS scheduler considers this priority when choosing the next thread to run on each available CPU core.
+Sets the priority ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) for threads that execute queries. The OS scheduler considers this priority when choosing the next thread to run on each available CPU core.
!!! warning "Warning"
- To use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments don't allow to set the `CAP_SYS_NICE` capability. In this case `clickhouse-server` shows a message about it at the start.
+ To use this setting, you need to set the `CAP_SYS_NICE` capability. The `clickhouse-server` package sets it up during installation. Some virtual environments don't allow you to set the `CAP_SYS_NICE` capability. In this case, `clickhouse-server` shows a message about it at the start.
Possible values:
-You can set values in the `[-20, 19]` range.
+- You can set values in the range `[-20, 19]`.
-The lower value means a higher priority. Threads with low values of `nice` priority are executed more frequently than threads with high values. High values are preferable for long running non-interactive queries because it allows them to quickly give up resources in favour of short interactive queries when they arrive.
+Lower values mean higher priority. Threads with low `nice` priority values are executed more frequently than threads with high values. High values are preferable for long running non-interactive queries because it allows them to quickly give up resources in favor of short interactive queries when they arrive.
Default value: 0.
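+
+For example, a long non-interactive query could be deprioritized like this (a sketch; the query is illustrative):
+
+```bash
+$ clickhouse-client --os_thread_priority=19 --query "SELECT count() FROM numbers(100000000)"
+```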
diff --git a/docs/en/operations/settings/settings_users.md b/docs/en/operations/settings/settings_users.md
index 7f5dba73306..99d558ff295 100644
--- a/docs/en/operations/settings/settings_users.md
+++ b/docs/en/operations/settings/settings_users.md
@@ -4,7 +4,7 @@ The `users` section of the `user.xml` configuration file contains user settings.
Structure of the `users` section:
-```
+```xml
<users>
    <user_name>
        <password></password>
        <networks incl="networks" replace="replace">
        </networks>
        <profile>profile_name</profile>
        <quota>default</quota>
    </user_name>
    <!-- Other user settings -->
</users>
```
@@ -80,7 +80,7 @@ All results of DNS requests are cached until the server restarts.
To open access for the user from any network, specify:
-```
+```xml
<ip>::/0</ip>
```
@@ -90,7 +90,7 @@ To open access for the user from any network, specify:
To open access only from localhost, specify:
-```
+```xml
<ip>::1</ip>
<ip>127.0.0.1</ip>
```
@@ -114,7 +114,7 @@ In this section, you can limit rows that are returned by ClickHouse for
The following configuration forces the user `user1` to see only the rows of `table1` where the value of the `id` field is 1000 as the result of `SELECT` queries.
-```
+```xml
<user1>
    <databases>
        <database_name>
            <table1>
                <filter>id = 1000</filter>
            </table1>
        </database_name>
    </databases>
</user1>
```
diff --git a/docs/en/operations/system_tables.md b/docs/en/operations/system_tables.md
index d5f38f51421..5eae7ecd544 100644
--- a/docs/en/operations/system_tables.md
+++ b/docs/en/operations/system_tables.md
@@ -494,7 +494,7 @@ WHERE table = 'visits'
FORMAT Vertical
```
-```
+```text
Row 1:
──────
database: merge
@@ -520,7 +520,7 @@ active_replicas: 2
Columns:
-```
+```text
database: Database name
table: Table name
engine: Table engine name
@@ -573,7 +573,7 @@ If you don't request the last 4 columns (log_max_index, log_pointer, total_repli
For example, you can check that everything is working correctly like this:
-``` sql
+```sql
SELECT
database,
table,
@@ -619,13 +619,13 @@ Columns:
Example:
-``` sql
+```sql
SELECT *
FROM system.settings
WHERE changed
```
-```
+```text
┌─name───────────────────┬─value───────┬─changed─┐
│ max_threads │ 8 │ 1 │
│ use_uncompressed_cache │ 0 │ 1 │
@@ -686,14 +686,14 @@ Columns:
Example:
-``` sql
+```sql
SELECT *
FROM system.zookeeper
WHERE path = '/clickhouse/tables/01-08/visits/replicas'
FORMAT Vertical
```
-```
+```text
Row 1:
──────
name: example01-08-1.yandex.ru
diff --git a/docs/en/operations/table_engines/aggregatingmergetree.md b/docs/en/operations/table_engines/aggregatingmergetree.md
index bab352d5cb5..006614eec1c 100644
--- a/docs/en/operations/table_engines/aggregatingmergetree.md
+++ b/docs/en/operations/table_engines/aggregatingmergetree.md
@@ -11,7 +11,7 @@ It is appropriate to use `AggregatingMergeTree` if it reduces the number of rows
## Creating a Table
-``` sql
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -59,7 +59,7 @@ In the results of `SELECT` query the values of `AggregateFunction` type have imp
`AggregatingMergeTree` materialized view that watches the `test.visits` table:
-``` sql
+```sql
CREATE MATERIALIZED VIEW test.basic
ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate)
AS SELECT
@@ -73,7 +73,7 @@ GROUP BY CounterID, StartDate;
Inserting data into the `test.visits` table.
-``` sql
+```sql
INSERT INTO test.visits ...
```
@@ -81,7 +81,7 @@ The data are inserted in both the table and view `test.basic` that will perform
To get the aggregated data, we need to execute a query such as `SELECT ... GROUP BY ...` from the view `test.basic`:
-``` sql
+```sql
SELECT
StartDate,
sumMerge(Visits) AS Visits,
diff --git a/docs/en/operations/table_engines/buffer.md b/docs/en/operations/table_engines/buffer.md
index 18024918835..8f4035da19b 100644
--- a/docs/en/operations/table_engines/buffer.md
+++ b/docs/en/operations/table_engines/buffer.md
@@ -25,7 +25,7 @@ The conditions for flushing the data are calculated separately for each of the `
Example:
-``` sql
+```sql
CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000)
```
diff --git a/docs/en/operations/table_engines/collapsingmergetree.md b/docs/en/operations/table_engines/collapsingmergetree.md
index 67cad64e950..42c1bf91860 100644
--- a/docs/en/operations/table_engines/collapsingmergetree.md
+++ b/docs/en/operations/table_engines/collapsingmergetree.md
@@ -65,7 +65,7 @@ Use the particular column `Sign`. If `Sign = 1` it means that the row is a state
For example, we want to calculate how many pages users visited on some site and how long they were there. At some point in time we write the following row with the state of user activity:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
@@ -73,7 +73,7 @@ For example, we want to calculate how many pages users visited on some site and
At some point later we register the change of user activity and write it with the following two rows.
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -86,7 +86,7 @@ The second row contains the current state.
As we need only the last state of user activity, the rows
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │
@@ -131,7 +131,7 @@ If you need to extract data without aggregation (for example, to check whether r
Example data:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │
@@ -166,11 +166,11 @@ We use two `INSERT` queries to create two different data parts. If we insert the
Getting the data:
-```
+```sql
SELECT * FROM UAct
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -195,7 +195,7 @@ FROM UAct
GROUP BY UserID
HAVING sum(Sign) > 0
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┐
│ 4324182021466249494 │ 6 │ 185 │
└─────────────────────┴───────────┴──────────┘
@@ -206,7 +206,7 @@ If we do not need aggregation and want to force collapsing, we can use `FINAL` m
```sql
SELECT * FROM UAct FINAL
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
@@ -218,7 +218,7 @@ This way of selecting the data is very inefficient. Don't use it for big tables.
Example data:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ -5 │ -146 │ -1 │
@@ -247,28 +247,38 @@ insert into UAct values(4324182021466249494, -5, -146, -1);
insert into UAct values(4324182021466249494, 6, 185, 1);
select * from UAct final; -- avoid using final in production (just for a test or small tables)
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
-
+```
+```sql
SELECT
UserID,
sum(PageViews) AS PageViews,
sum(Duration) AS Duration
FROM UAct
GROUP BY UserID
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┐
│ 4324182021466249494 │ 6 │ 185 │
└─────────────────────┴───────────┴──────────┘
-
+```
+```sql
select count() FROM UAct
+```
+```text
┌─count()─┐
│ 3 │
└─────────┘
-
+```
+```sql
optimize table UAct final;
select * FROM UAct
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
diff --git a/docs/en/operations/table_engines/custom_partitioning_key.md b/docs/en/operations/table_engines/custom_partitioning_key.md
index e53cf903aed..ba9c551ce2a 100644
--- a/docs/en/operations/table_engines/custom_partitioning_key.md
+++ b/docs/en/operations/table_engines/custom_partitioning_key.md
@@ -6,7 +6,7 @@ A partition is a logical combination of records in a table by a specified criter
The partition is specified in the `PARTITION BY expr` clause when [creating a table](mergetree.md#table_engine-mergetree-creating-a-table). The partition key can be any expression from the table columns. For example, to specify partitioning by month, use the expression `toYYYYMM(date_column)`:
-``` sql
+```sql
CREATE TABLE visits
(
VisitDate Date,
@@ -20,7 +20,7 @@ ORDER BY Hour;
The partition key can also be a tuple of expressions (similar to the [primary key](mergetree.md#primary-keys-and-indexes-in-queries)). For example:
-``` sql
+```sql
ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/name', 'replica1', Sign)
PARTITION BY (toMonday(StartDate), EventType)
ORDER BY (CounterID, StartDate, intHash32(UserID));
@@ -35,7 +35,7 @@ When inserting new data to a table, this data is stored as a separate part (chun
Use the [system.parts](../system_tables.md#system_tables-parts) table to view the table parts and partitions. For example, let's assume that we have a `visits` table with partitioning by month. Let's perform the `SELECT` query for the `system.parts` table:
-``` sql
+```sql
SELECT
partition,
name,
@@ -44,7 +44,7 @@ FROM system.parts
WHERE table = 'visits'
```
-```
+```text
┌─partition─┬─name───────────┬─active─┐
│ 201901 │ 201901_1_3_1 │ 0 │
│ 201901 │ 201901_1_9_2 │ 1 │
@@ -74,11 +74,11 @@ The `active` column shows the status of the part. `1` is active; `0` is inactive
As you can see in the example, there are several separated parts of the same partition (for example, `201901_1_3_1` and `201901_1_9_2`). This means that these parts are not merged yet. ClickHouse merges the inserted parts of data periodically, approximately 15 minutes after inserting. In addition, you can perform a non-scheduled merge using the [OPTIMIZE](../../query_language/misc.md#misc_operations-optimize) query. Example:
-``` sql
+```sql
OPTIMIZE TABLE visits PARTITION 201902;
```
-```
+```text
┌─partition─┬─name───────────┬─active─┐
│ 201901 │ 201901_1_3_1 │ 0 │
│ 201901 │ 201901_1_9_2 │ 1 │
@@ -96,7 +96,7 @@ Inactive parts will be deleted approximately 10 minutes after merging.
Another way to view a set of parts and partitions is to go into the directory of the table: `/var/lib/clickhouse/data///`. For example:
```bash
-dev:/var/lib/clickhouse/data/default/visits$ ls -l
+/var/lib/clickhouse/data/default/visits$ ls -l
total 40
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 1 16:48 201901_1_3_1
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:17 201901_1_9_2
diff --git a/docs/en/operations/table_engines/dictionary.md b/docs/en/operations/table_engines/dictionary.md
index 4ed0199297e..045c013f1c7 100644
--- a/docs/en/operations/table_engines/dictionary.md
+++ b/docs/en/operations/table_engines/dictionary.md
@@ -38,9 +38,7 @@ As an example, consider a dictionary of `products` with the following configurat
Query the dictionary data:
-``` sql
-select name, type, key, attribute.names, attribute.types, bytes_allocated, element_count,source from system.dictionaries where name = 'products';
-
+```sql
SELECT
name,
type,
@@ -54,7 +52,7 @@ FROM system.dictionaries
WHERE name = 'products'
```
-```
+```text
┌─name─────┬─type─┬─key────┬─attribute.names─┬─attribute.types─┬─bytes_allocated─┬─element_count─┬─source──────────┐
│ products │ Flat │ UInt64 │ ['title'] │ ['String'] │ 23065376 │ 175032 │ ODBC: .products │
└──────────┴──────┴────────┴─────────────────┴─────────────────┴─────────────────┴───────────────┴─────────────────┘
@@ -66,45 +64,29 @@ This view isn't helpful when you need to get raw data, or when performing a `JOI
Syntax:
-```
+```sql
CREATE TABLE %table_name% (%fields%) ENGINE = Dictionary(%dictionary_name%)
```
Usage example:
-``` sql
+```sql
create table products (product_id UInt64, title String) Engine = Dictionary(products);
-
-CREATE TABLE products
-(
- product_id UInt64,
- title String,
-)
-ENGINE = Dictionary(products)
```
-
```text
-Ok.
-
-0 rows in set. Elapsed: 0.004 sec.
+Ok
```
Take a look at what's in the table.
-``` sql
+```sql
select * from products limit 1;
-
-SELECT *
-FROM products
-LIMIT 1
```
-```
+```text
┌────product_id─┬─title───────────┐
│ 152689 │ Some item │
└───────────────┴─────────────────┘
-
-1 rows in set. Elapsed: 0.006 sec.
```
diff --git a/docs/en/operations/table_engines/distributed.md b/docs/en/operations/table_engines/distributed.md
index ade20da4eff..8bb0082843b 100644
--- a/docs/en/operations/table_engines/distributed.md
+++ b/docs/en/operations/table_engines/distributed.md
@@ -6,7 +6,7 @@ Reading is automatically parallelized. During a read, the table indexes on remot
The Distributed engine accepts parameters: the cluster name in the server's config file, the name of a remote database, the name of a remote table, and (optionally) a sharding key.
Example:
-```
+```sql
Distributed(logs, default, hits[, sharding_key])
```
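+
+A fuller sketch of creating such a table (the local table `hits` and sharding by `rand()` are illustrative):
+
+```bash
+$ clickhouse-client --query "CREATE TABLE hits_all AS hits ENGINE = Distributed(logs, default, hits, rand())"
+```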
diff --git a/docs/en/operations/table_engines/external_data.md b/docs/en/operations/table_engines/external_data.md
index 315bcb386a8..c57794df565 100644
--- a/docs/en/operations/table_engines/external_data.md
+++ b/docs/en/operations/table_engines/external_data.md
@@ -32,9 +32,9 @@ The files specified in 'file' will be parsed by the format specified in 'format'
Examples:
```bash
-echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8
+$ echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8
849897
-cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String'
+$ cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String'
/bin/sh 20
/bin/false 5
/bin/bash 4
@@ -47,9 +47,9 @@ When using the HTTP interface, external data is passed in the multipart/form-dat
Example:
```bash
-cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv
+$ cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv
-curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String'
+$ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String'
/bin/sh 20
/bin/false 5
/bin/bash 4
diff --git a/docs/en/operations/table_engines/file.md b/docs/en/operations/table_engines/file.md
index bd7ee3cb90e..71e29bdff6d 100644
--- a/docs/en/operations/table_engines/file.md
+++ b/docs/en/operations/table_engines/file.md
@@ -11,7 +11,7 @@ Usage examples:
## Usage in ClickHouse Server
-```
+```sql
File(Format)
```
@@ -33,7 +33,7 @@ You may manually create this subfolder and file in server filesystem and then [A
**1.** Set up the `file_engine_table` table:
-``` sql
+```sql
CREATE TABLE file_engine_table (name String, value UInt32) ENGINE=File(TabSeparated)
```
@@ -49,11 +49,11 @@ two 2
**3.** Query the data:
-``` sql
+```sql
SELECT * FROM file_engine_table
```
-```
+```text
┌─name─┬─value─┐
│ one │ 1 │
│ two │ 2 │
diff --git a/docs/en/operations/table_engines/graphitemergetree.md b/docs/en/operations/table_engines/graphitemergetree.md
index b85a88f56ce..a8ed8aaaddf 100644
--- a/docs/en/operations/table_engines/graphitemergetree.md
+++ b/docs/en/operations/table_engines/graphitemergetree.md
@@ -89,7 +89,7 @@ patterns
Structure of the `patterns` section:
-```
+```text
pattern
regexp
function
diff --git a/docs/en/operations/table_engines/hdfs.md b/docs/en/operations/table_engines/hdfs.md
index 652ca43b176..1f6ecc50a79 100644
--- a/docs/en/operations/table_engines/hdfs.md
+++ b/docs/en/operations/table_engines/hdfs.md
@@ -5,7 +5,7 @@ to the [File](file.md) and [URL](url.md) engines, but provides Hadoop-specific f
## Usage
-```
+```sql
ENGINE = HDFS(URI, format)
```
The `URI` parameter is the whole file URI in HDFS.
@@ -13,27 +13,28 @@ The `format` parameter specifies one of the available file formats. To perform
`SELECT` queries, the format must be supported for input, and to perform
`INSERT` queries -- for output. The available formats are listed in the
[Formats](../../interfaces/formats.md#formats) section.
+The path part of `URI` may contain globs. In this case, the table is read-only.
**Example:**
**1.** Set up the `hdfs_engine_table` table:
-``` sql
+```sql
CREATE TABLE hdfs_engine_table (name String, value UInt32) ENGINE=HDFS('hdfs://hdfs1:9000/other_storage', 'TSV')
```
**2.** Fill file:
-``` sql
+```sql
INSERT INTO hdfs_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```
**3.** Query the data:
-``` sql
+```sql
SELECT * FROM hdfs_engine_table LIMIT 2
```
-```
+```text
┌─name─┬─value─┐
│ one │ 1 │
│ two │ 2 │
@@ -48,4 +49,55 @@ SELECT * FROM hdfs_engine_table LIMIT 2
- Indexes.
- Replication.
+**Globs in path**
+
+Multiple path components can contain globs. To be processed, a file must exist and match the whole path pattern. The listing of files is determined during `SELECT` (not at the `CREATE` moment).
+
+- `*` — Substitutes any number of any characters except `/`, including the empty string.
+- `?` — Substitutes any single character.
+- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string', 'another_string', 'yet_another_one'`.
+- `{N..M}` — Substitutes any number in the range from N to M, including both borders.
+
+    Constructions with `{}` are similar to the [remote table function](../../query_language/table_functions/remote.md).
+
+**Example**
+
+1. Suppose we have several files in TSV format with the following URIs on HDFS:
+
+- 'hdfs://hdfs1:9000/some_dir/some_file_1'
+- 'hdfs://hdfs1:9000/some_dir/some_file_2'
+- 'hdfs://hdfs1:9000/some_dir/some_file_3'
+- 'hdfs://hdfs1:9000/another_dir/some_file_1'
+- 'hdfs://hdfs1:9000/another_dir/some_file_2'
+- 'hdfs://hdfs1:9000/another_dir/some_file_3'
+
+2. There are several ways to make a table consisting of all six files:
+
+```sql
+CREATE TABLE table_with_range (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV')
+```
+
+Another way:
+
+```sql
+CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_?', 'TSV')
+```
+
+The table consists of all the files in both directories (all files should match the format and schema described in the query):
+
+```sql
+CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV')
+```
+
+!!! warning
+ If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
+
+**Example**
+
+Create a table with files named `file000`, `file001`, ... , `file999`:
+
+```sql
+CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
+```
+
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/hdfs/)
diff --git a/docs/en/operations/table_engines/jdbc.md b/docs/en/operations/table_engines/jdbc.md
index 91e93c53232..e2ceb12641d 100644
--- a/docs/en/operations/table_engines/jdbc.md
+++ b/docs/en/operations/table_engines/jdbc.md
@@ -27,7 +27,7 @@ ENGINE = JDBC(dbms_uri, external_database, external_table)
Creating a table in MySQL server by connecting directly with its console client:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
@@ -50,30 +50,29 @@ mysql> select * from test;
Creating a table in ClickHouse server and selecting data from it:
-```
+```sql
CREATE TABLE jdbc_table ENGINE JDBC('jdbc:mysql://localhost:3306/?user=root&password=root', 'test', 'test')
-
-Ok.
-
+```
+```sql
DESCRIBE TABLE jdbc_table
-
+```
+```text
┌─name───────────────┬─type───────────────┬─default_type─┬─default_expression─┐
│ int_id │ Int32 │ │ │
│ int_nullable │ Nullable(Int32) │ │ │
│ float │ Float32 │ │ │
│ float_nullable │ Nullable(Float32) │ │ │
└────────────────────┴────────────────────┴──────────────┴────────────────────┘
-
-10 rows in set. Elapsed: 0.031 sec.
-
+```
+```sql
SELECT *
FROM jdbc_table
-
+```
+```text
┌─int_id─┬─int_nullable─┬─float─┬─float_nullable─┐
│ 1 │ ᴺᵁᴸᴸ │ 2 │ ᴺᵁᴸᴸ │
└────────┴──────────────┴───────┴────────────────┘
-1 rows in set. Elapsed: 0.055 sec.
```
## See Also
diff --git a/docs/en/operations/table_engines/join.md b/docs/en/operations/table_engines/join.md
index 6a7236e2c5b..4182ac936f8 100644
--- a/docs/en/operations/table_engines/join.md
+++ b/docs/en/operations/table_engines/join.md
@@ -4,7 +4,7 @@ Prepared data structure for using in [JOIN](../../query_language/select.md#selec
## Creating a Table
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
diff --git a/docs/en/operations/table_engines/kafka.md b/docs/en/operations/table_engines/kafka.md
index 90745ebb4cf..16c53d786c6 100644
--- a/docs/en/operations/table_engines/kafka.md
+++ b/docs/en/operations/table_engines/kafka.md
@@ -11,7 +11,7 @@ Kafka lets you:
## Creating a Table {#table_engine-kafka-creating-a-table}
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -44,7 +44,7 @@ Optional parameters:
Examples:
-``` sql
+```sql
CREATE TABLE queue (
timestamp UInt64,
level String,
@@ -79,7 +79,7 @@ Examples:
Do not use this method in new projects. If possible, switch old projects to the method described above.
-```
+```sql
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
[, kafka_row_delimiter, kafka_schema, kafka_num_consumers, kafka_skip_broken_messages])
```
@@ -104,7 +104,7 @@ One kafka table can have as many materialized views as you like, they do not rea
Example:
-``` sql
+```sql
CREATE TABLE queue (
timestamp UInt64,
level String,
@@ -128,7 +128,7 @@ To improve performance, received messages are grouped into blocks the size of [m
To stop receiving topic data or to change the conversion logic, detach the materialized view:
-```
+```sql
DETACH TABLE consumer;
ATTACH MATERIALIZED VIEW consumer;
```
diff --git a/docs/en/operations/table_engines/log_family.md b/docs/en/operations/table_engines/log_family.md
index aef0e21f08c..9353ea796b6 100644
--- a/docs/en/operations/table_engines/log_family.md
+++ b/docs/en/operations/table_engines/log_family.md
@@ -16,7 +16,7 @@ Engines:
- Append data to the end of file when writing.
- Support locks for concurrent data access.
- During `INSERT` query the table is locked, and other queries for reading and writing data both wait for unlocking. If there are no writing data queries, any number of reading data queries can be performed concurrently.
+ During `INSERT` queries, the table is locked, and other queries for reading and writing data both wait for the table to unlock. If there are no data writing queries, any number of data reading queries can be performed concurrently.
- Do not support [mutation](../../query_language/alter.md#alter-mutations) operations.
- Do not support indexes.
@@ -30,9 +30,9 @@ Engines:
## Differences
-The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine does not support a parallel reading of data. It reads the data slower than other engines of the family that have parallel reading, and it uses almost as many descriptors as the `Log` engine because it stores each column in a separate file. Use it in simple low-load scenarios.
+The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine doesn't support parallel data reading by several threads. It reads data slower than other engines in the family that support parallel reading and it uses almost as many descriptors as the `Log` engine because it stores each column in a separate file. Use it in simple low-load scenarios.
-The `Log` and `StripeLog` engines support parallel reading of data. When reading data ClickHouse uses multiple threads. Each thread processes separated data block. The `Log` engine uses the separate file for each column of the table. The `StripeLog` stores all the data in one file. Thus the `StripeLog` engine uses fewer descriptors in the operating system, but the `Log` engine provides a more efficient reading of the data.
+The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer descriptors in the operating system, but the `Log` engine provides higher efficiency when reading data.
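+
+A minimal sketch of creating a table from this family (names are illustrative):
+
+```bash
+$ clickhouse-client --query "CREATE TABLE log_example (ts DateTime, message String) ENGINE = StripeLog"
+```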
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/log_family/)
diff --git a/docs/en/operations/table_engines/mergetree.md b/docs/en/operations/table_engines/mergetree.md
index c3d64395a02..82ea3b23cc7 100644
--- a/docs/en/operations/table_engines/mergetree.md
+++ b/docs/en/operations/table_engines/mergetree.md
@@ -47,7 +47,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
For a description of parameters, see the [CREATE query description](../../query_language/create.md).
-!!! note "Note"
+!!! note "Note"
`INDEX` is an experimental feature, see [Data Skipping Indexes](#table_engine-mergetree-data_skipping-indexes).
### Query Clauses
@@ -78,7 +78,7 @@ For a description of parameters, see the [CREATE query description](../../query_
For more details, see [TTL for columns and tables](#table_engine-mergetree-ttl)
- `SETTINGS` — Additional parameters that control the behavior of the `MergeTree`:
- - `index_granularity` — The granularity of an index. The number of data rows between the "marks" of an index. By default, 8192. For the list of available parameters, see [MergeTreeSettings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
+ - `index_granularity` — The granularity of an index. The number of data rows between the "marks" of an index. By default, 8192. For the list of available parameters, see [MergeTreeSettings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
- `use_minimalistic_part_header_in_zookeeper` — Storage method of the data parts headers in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, then ZooKeeper stores less data. For more information, see the [setting description](../server_settings/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in "Server configuration parameters".
- `min_merge_bytes_to_use_direct_io` — The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
@@ -101,7 +101,7 @@ The `index_granularity` setting can be omitted because 8192 is the default value
!!! attention
Do not use this method in new projects. If possible, switch old projects to the method described above.
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -119,7 +119,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
**Example**
-```
+```sql
MergeTree(EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID)), 8192)
```
@@ -288,24 +288,24 @@ SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
- `ngrambf_v1(n, size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
- Stores a [bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) that contains all ngrams from a block of data. Works only with strings. Can be used for optimization of `equals`, `like` and `in` expressions.
+ Stores a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) that contains all ngrams from a block of data. Works only with strings. Can be used for optimization of `equals`, `like` and `in` expressions.
- `n` — ngram size,
- `size_of_bloom_filter_in_bytes` — Bloom filter size in bytes (you can use large values here, for example, 256 or 512, because it can be compressed well).
- - `number_of_hash_functions` — The number of hash functions used in the bloom filter.
- - `random_seed` — The seed for bloom filter hash functions.
+ - `number_of_hash_functions` — The number of hash functions used in the Bloom filter.
+ - `random_seed` — The seed for Bloom filter hash functions.
- `tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
The same as `ngrambf_v1`, but stores tokens instead of ngrams. Tokens are sequences separated by non-alphanumeric characters.
-- `bloom_filter([false_positive])` — Stores [bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) for the specified columns.
+- `bloom_filter([false_positive])` — Stores a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) for the specified columns.
- The `false_positive` optional parameter is the probability of false positive response from the filter. Possible values: (0, 1). Default value: 0.025.
+ The optional `false_positive` parameter is the probability of receiving a false positive response from the filter. Possible values: (0, 1). Default value: 0.025.
Supported data types: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`.
- Supported for the following functions: [equals](../../query_language/functions/comparison_functions.md), [notEquals](../../query_language/functions/comparison_functions.md), [in](../../query_language/functions/in_functions.md), [notIn](../../query_language/functions/in_functions.md).
+ The following functions can use it: [equals](../../query_language/functions/comparison_functions.md), [notEquals](../../query_language/functions/comparison_functions.md), [in](../../query_language/functions/in_functions.md), [notIn](../../query_language/functions/in_functions.md).
```sql
INDEX sample_index (u64 * length(s)) TYPE minmax GRANULARITY 4
@@ -315,9 +315,9 @@ INDEX sample_index3 (lower(str), str) TYPE ngrambf_v1(3, 256, 2, 0) GRANULARITY
#### Functions Support
-Conditions in the `WHERE` clause contain calls of functions over the columns. If the column is a part of some index, ClickHouse tries to use this index when performing the functions. ClickHouse supports different subset of functions for using indexes.
+Conditions in the `WHERE` clause contain calls of functions that operate on columns. If the column is a part of an index, ClickHouse tries to use this index when performing the functions. ClickHouse supports different subsets of functions for using indexes.
-The `set` index can be used with all functions. Functions subsets for other indexes are in the table below.
+The `set` index can be used with all functions. Function subsets for other indexes are shown in the table below.
Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter
----------------------------|-------------|--------|------------|------------|---------------
@@ -326,7 +326,7 @@ Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | b
[like](../../query_language/functions/string_search_functions.md#function-like) | ✔ | ✔ | ✔ | ✗ | ✗
[notLike](../../query_language/functions/string_search_functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗
[startsWith](../../query_language/functions/string_functions.md#function-startswith) | ✔ | ✔ | ✔ | ✔ | ✗
-[endsWith](../../query_language/functions/string_functions.md#function-endswith) | ✗ | ✗ | ✔ | ✔ |
+[endsWith](../../query_language/functions/string_functions.md#function-endswith) | ✗ | ✗ | ✔ | ✔ | ✗
[multiSearchAny](../../query_language/functions/string_search_functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✔ | ✗
[in](../../query_language/functions/in_functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔
[notIn](../../query_language/functions/in_functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔
@@ -338,9 +338,9 @@ Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | b
[notEmpty](../../query_language/functions/array_functions.md#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗
hasToken | ✗ | ✗ | ✗ | ✔ | ✗
-Functions with a constant argument less than ngram size couldn't be used by `ngrambf_v1` for the query optimization.
+Functions with a constant argument that is less than ngram size can't be used by `ngrambf_v1` for query optimization.
-Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1`, `bloom_filter` indexes couldn't be used for optimizing queries where the result of a function is expected to be false, for example:
+Bloom filters can have false positive matches, so the `ngrambf_v1`, `tokenbf_v1`, and `bloom_filter` indexes can't be used for optimizing queries where the result of a function is expected to be false, for example:
- Can be optimized:
- `s LIKE '%test%'`
@@ -370,14 +370,14 @@ The `TTL` clause can be set for the whole table and for each individual column.
The table must have a column of the [Date](../../data_types/date.md) or [DateTime](../../data_types/datetime.md) data type. To define the lifetime of data, use operations on this time column, for example:
-```
+```sql
TTL time_column
TTL time_column + interval
```
To define `interval`, use [time interval](../../query_language/operators.md#operators-datetime) operators.
-```
+```sql
TTL date_time + INTERVAL 1 MONTH
TTL date_time + INTERVAL 15 HOUR
```
diff --git a/docs/en/operations/table_engines/mysql.md b/docs/en/operations/table_engines/mysql.md
index 3b044a6184a..9dac9ba6478 100644
--- a/docs/en/operations/table_engines/mysql.md
+++ b/docs/en/operations/table_engines/mysql.md
@@ -45,7 +45,7 @@ The rest of the conditions and the `LIMIT` sampling constraint are executed in C
Table in MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/en/operations/table_engines/odbc.md b/docs/en/operations/table_engines/odbc.md
index a3e7cda4a39..afcedd4849e 100644
--- a/docs/en/operations/table_engines/odbc.md
+++ b/docs/en/operations/table_engines/odbc.md
@@ -8,7 +8,7 @@ This engine supports the [Nullable](../../data_types/nullable.md) data type.
## Creating a Table
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1],
@@ -41,15 +41,17 @@ Ensure that unixODBC and MySQL Connector are installed.
By default (if installed from packages), ClickHouse starts as user `clickhouse`. Thus, you need to create and configure this user in the MySQL server.
+```bash
+$ sudo mysql
```
-sudo mysql
+```sql
mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'localhost' WITH GRANT OPTION;
```
Then configure the connection in `/etc/odbc.ini`.
-```
+```bash
$ cat /etc/odbc.ini
[mysqlconn]
DRIVER = /usr/local/lib/libmyodbc5w.so
@@ -62,8 +64,8 @@ PASSWORD = clickhouse
You can check the connection using the `isql` utility from the unixODBC installation.
-```
-isql -v mysqlconn
+```bash
+$ isql -v mysqlconn
+---------------------------------------+
| Connected! |
| |
@@ -72,7 +74,7 @@ isql -v mysqlconn
Table in MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/en/operations/table_engines/stripelog.md b/docs/en/operations/table_engines/stripelog.md
index 6637055bc30..11ba0ae3ace 100644
--- a/docs/en/operations/table_engines/stripelog.md
+++ b/docs/en/operations/table_engines/stripelog.md
@@ -6,7 +6,7 @@ Use this engine in scenarios when you need to write many tables with a small amo
## Creating a Table {#table_engines-stripelog-creating-a-table}
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -60,7 +60,7 @@ ClickHouse uses multiple threads when selecting data. Each thread reads a separa
```sql
SELECT * FROM stripe_log_table
```
-```
+```text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:27:32 │ REGULAR │ The second regular message │
│ 2019-01-18 14:34:53 │ WARNING │ The first warning message │
@@ -75,7 +75,7 @@ Sorting the results (ascending order by default):
```sql
SELECT * FROM stripe_log_table ORDER BY timestamp
```
-```
+```text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:23:43 │ REGULAR │ The first regular message │
│ 2019-01-18 14:27:32 │ REGULAR │ The second regular message │
diff --git a/docs/en/operations/table_engines/summingmergetree.md b/docs/en/operations/table_engines/summingmergetree.md
index 807476861f2..11a7b73bc44 100644
--- a/docs/en/operations/table_engines/summingmergetree.md
+++ b/docs/en/operations/table_engines/summingmergetree.md
@@ -7,7 +7,7 @@ We recommend to use the engine together with `MergeTree`. Store complete data in
## Creating a Table
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -38,7 +38,7 @@ When creating a `SummingMergeTree` table the same [clauses](mergetree.md) are re
!!! attention
Do not use this method in new projects and, if possible, switch the old projects to the method described above.
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -68,8 +68,8 @@ ORDER BY key
Insert data into it:
-```
-:) INSERT INTO summtt Values(1,1),(1,2),(2,1)
+```sql
+INSERT INTO summtt VALUES (1,1),(1,2),(2,1)
```
ClickHouse may not sum all the rows completely ([see below](#data-processing)), so we use the aggregate function `sum` and a `GROUP BY` clause in the query.
@@ -78,7 +78,7 @@ ClickHouse may sum all the rows not completely ([see below](#data-processing)),
SELECT key, sum(value) FROM summtt GROUP BY key
```
-```
+```text
┌─key─┬─sum(value)─┐
│ 2 │ 1 │
│ 1 │ 3 │
@@ -119,7 +119,7 @@ then this nested table is interpreted as a mapping of `key => (values...)`, and
Examples:
-```
+```text
[(1, 100)] + [(2, 150)] -> [(1, 100), (2, 150)]
[(1, 100)] + [(1, 150)] -> [(1, 250)]
[(1, 100)] + [(1, 150), (2, 150)] -> [(1, 250), (2, 150)]
diff --git a/docs/en/operations/table_engines/tinylog.md b/docs/en/operations/table_engines/tinylog.md
index fe4913118e9..7be5e36c0b6 100644
--- a/docs/en/operations/table_engines/tinylog.md
+++ b/docs/en/operations/table_engines/tinylog.md
@@ -1,12 +1,9 @@
# TinyLog
-Engine belongs to the family of log engines. See [Log Engine Family](log_family.md) for common properties of log engines and for their differences.
+The engine belongs to the log engine family. See [Log Engine Family](log_family.md) for common properties of log engines and their differences.
-The typical way using this table engine is write-once method: firstly write the data one time, then read it as many times as needed. For example, you can use `TinyLog`-type tables for intermediary data that is processed in small batches.
-
-Queries are executed in a single stream. In other words, this engine is intended for relatively small tables (recommended up to about 1,000,000 rows). It makes sense to use this table engine if you have many small tables, since it is simpler than the [Log](log.md) engine (fewer files need to be opened).
-
-The situation when you have a large number of small tables guarantees poor productivity, but may already be used when working with another DBMS, and you may find it easier to switch to using `TinyLog`-type tables.
+This table engine is typically used with the write-once method: write data one time, then read it as many times as necessary. For example, you can use `TinyLog`-type tables for intermediary data that is processed in small batches. Note that storing data in a large number of small tables is inefficient.
+Queries are executed in a single stream. In other words, this engine is intended for relatively small tables (up to about 1,000,000 rows). It makes sense to use this table engine if you have many small tables, since it's simpler than the [Log](log.md) engine (fewer files need to be opened).
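+
+A minimal write-once sketch (names are illustrative):
+
+```bash
+$ clickhouse-client --query "CREATE TABLE tiny_example (id UInt64, value String) ENGINE = TinyLog"
+$ clickhouse-client --query "INSERT INTO tiny_example VALUES (1, 'one'), (2, 'two')"
+$ clickhouse-client --query "SELECT * FROM tiny_example"
+```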
[Original article](https://clickhouse.yandex/docs/en/operations/table_engines/tinylog/)
diff --git a/docs/en/operations/table_engines/url.md b/docs/en/operations/table_engines/url.md
index dcae1d5d6d1..6521604171c 100644
--- a/docs/en/operations/table_engines/url.md
+++ b/docs/en/operations/table_engines/url.md
@@ -21,7 +21,7 @@ respectively. For processing `POST` requests, the remote server must support
**1.** Create a `url_engine_table` table on the server:
-``` sql
+```sql
CREATE TABLE url_engine_table (word String, value UInt64)
ENGINE=URL('http://127.0.0.1:12345/', CSV)
```
@@ -46,16 +46,16 @@ if __name__ == "__main__":
```
```bash
-python3 server.py
+$ python3 server.py
```
**3.** Request data:
-``` sql
+```sql
SELECT * FROM url_engine_table
```
-```
+```text
┌─word──┬─value─┐
│ Hello │ 1 │
│ World │ 2 │
diff --git a/docs/en/operations/table_engines/versionedcollapsingmergetree.md b/docs/en/operations/table_engines/versionedcollapsingmergetree.md
index 2dd417d85bf..547e6625744 100644
--- a/docs/en/operations/table_engines/versionedcollapsingmergetree.md
+++ b/docs/en/operations/table_engines/versionedcollapsingmergetree.md
@@ -29,7 +29,7 @@ For a description of query parameters, see the [query description](../../query_l
**Engine Parameters**
-```
+```sql
VersionedCollapsingMergeTree(sign, version)
```
@@ -81,7 +81,7 @@ Use the `Sign` column when writing the row. If `Sign = 1` it means that the row
For example, we want to calculate how many pages users visited on some site and how long they were there. At some point in time we write the following row with the state of user activity:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
@@ -89,7 +89,7 @@ For example, we want to calculate how many pages users visited on some site and
At some point later we register the change of user activity and write it with the following two rows.
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │ 2 │
@@ -102,7 +102,7 @@ The second row contains the current state.
Because we need only the last state of user activity, the rows
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 │
@@ -139,7 +139,7 @@ If you need to extract the data with "collapsing" but without aggregation (for e
Example data:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 │
@@ -175,11 +175,11 @@ We use two `INSERT` queries to create two different data parts. If we insert the
Getting the data:
-```
+```sql
SELECT * FROM UAct
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
@@ -205,7 +205,7 @@ FROM UAct
GROUP BY UserID, Version
HAVING sum(Sign) > 0
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Version─┐
│ 4324182021466249494 │ 6 │ 185 │ 2 │
└─────────────────────┴───────────┴──────────┴─────────┘
@@ -216,7 +216,7 @@ If we don't need aggregation and want to force collapsing, we can use the `FINAL
```sql
SELECT * FROM UAct FINAL
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │ 2 │
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
diff --git a/docs/en/operations/tips.md b/docs/en/operations/tips.md
index 6a347af7e22..c3418de4be1 100644
--- a/docs/en/operations/tips.md
+++ b/docs/en/operations/tips.md
@@ -5,7 +5,7 @@
Always use the `performance` scaling governor. The `on-demand` scaling governor works much worse with constantly high demand.
```bash
-echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
+$ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
## CPU Limitations
@@ -20,8 +20,8 @@ For large amounts of data and when processing interactive (online) queries, you
Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.
Do not disable overcommit. The value `cat /proc/sys/vm/overcommit_memory` should be 0 or 1. Run
-```
-echo 0 | sudo tee /proc/sys/vm/overcommit_memory
+```bash
+$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
```
## Huge Pages
@@ -29,7 +29,7 @@ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
Always disable transparent huge pages. It interferes with memory allocators, which leads to significant performance degradation.
```bash
-echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
+$ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
Use `perf top` to watch the time spent in the kernel for memory management.
@@ -54,7 +54,7 @@ If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAI
When using RAID-5, RAID-6 or RAID-50, always increase stripe_cache_size, since the default value is usually not the best choice.
```bash
-echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
+$ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```
Calculate the exact number from the number of devices and the block size, using the formula: `2 * num_devices * chunk_size_in_bytes / 4096`.
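For example, for a hypothetical array of 8 data devices with a 512 KiB (524288-byte) chunk size, the formula gives `2 * 8 * 524288 / 4096 = 2048`:
```bash
$ echo 2048 | sudo tee /sys/block/md2/md/stripe_cache_size
```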
@@ -163,7 +163,7 @@ dynamicConfigFile=/etc/zookeeper-{{ cluster['name'] }}/conf/zoo.cfg.dynamic
Java version:
-```
+```text
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
```
@@ -211,7 +211,7 @@ JAVA_OPTS="-Xms{{ cluster.get('xms','128M') }} \
Salt init:
-```
+```text
description "zookeeper-{{ cluster['name'] }} centralized coordination service"
start on runlevel [2345]
diff --git a/docs/en/operations/troubleshooting.md b/docs/en/operations/troubleshooting.md
index fdfff6b26b2..656a12bad2a 100644
--- a/docs/en/operations/troubleshooting.md
+++ b/docs/en/operations/troubleshooting.md
@@ -26,14 +26,14 @@ Possible issues:
Command:
-```
-sudo service clickhouse-server status
+```bash
+$ sudo service clickhouse-server status
```
If the server is not running, start it with the command:
-```
-sudo service clickhouse-server start
+```bash
+$ sudo service clickhouse-server start
```
**Check logs**
@@ -47,19 +47,19 @@ If the server started successfully, you should see the strings:
If `clickhouse-server` failed to start with a configuration error, you should see the `<Error>` string with an error description. For example:
-```
+```text
2019.01.11 15:23:25.549505 [ 45 ] {} ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused
```
If you don't see an error at the end of the file, look through the entire file starting from the string:
-```
+```text
Application: starting up.
```
If you try to start a second instance of `clickhouse-server` on the server, you see the following log:
-```
+```text
2019.01.11 15:25:11.151730 [ 1 ] {} : Starting ClickHouse 19.1.0 with revision 54413
2019.01.11 15:25:11.154578 [ 1 ] {} Application: starting up
2019.01.11 15:25:11.156361 [ 1 ] {} StatusFile: Status file ./status already exists - unclean restart. Contents:
@@ -77,14 +77,14 @@ Revision: 54413
If you don't find any useful information in `clickhouse-server` logs or there aren't any logs, you can view `systemd` logs using the command:
-```
-sudo journalctl -u clickhouse-server
+```bash
+$ sudo journalctl -u clickhouse-server
```
**Start clickhouse-server in interactive mode**
-```
-sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
+```bash
+$ sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
```
This command starts the server as an interactive app with standard parameters of the autostart script. In this mode `clickhouse-server` prints all the event messages in the console.
diff --git a/docs/en/operations/update.md b/docs/en/operations/update.md
index 9a2268a0793..d008a0e3c61 100644
--- a/docs/en/operations/update.md
+++ b/docs/en/operations/update.md
@@ -2,10 +2,10 @@
If ClickHouse was installed from deb packages, execute the following commands on the server:
-```
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
-sudo service clickhouse-server restart
+```bash
+$ sudo apt-get update
+$ sudo apt-get install clickhouse-client clickhouse-server
+$ sudo service clickhouse-server restart
```
If you installed ClickHouse using something other than the recommended deb packages, use the appropriate update method.
diff --git a/docs/en/operations/utils/clickhouse-copier.md b/docs/en/operations/utils/clickhouse-copier.md
index 57358d49f90..08388aab7db 100644
--- a/docs/en/operations/utils/clickhouse-copier.md
+++ b/docs/en/operations/utils/clickhouse-copier.md
@@ -24,7 +24,7 @@ To reduce network traffic, we recommend running `clickhouse-copier` on the same
The utility should be run manually:
```bash
-clickhouse-copier copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
+$ clickhouse-copier copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
```
Parameters:
diff --git a/docs/en/operations/utils/clickhouse-local.md b/docs/en/operations/utils/clickhouse-local.md
index e91d1eda2d7..618a90585d2 100644
--- a/docs/en/operations/utils/clickhouse-local.md
+++ b/docs/en/operations/utils/clickhouse-local.md
@@ -17,8 +17,8 @@ By default `clickhouse-local` does not have access to data on the same host, but
Basic usage:
-``` bash
-clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" -q "query"
+```bash
+$ clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" -q "query"
```
Arguments:
@@ -40,8 +40,8 @@ Also there are arguments for each ClickHouse configuration variable which are mo
## Examples
-``` bash
-echo -e "1,2\n3,4" | clickhouse-local -S "a Int64, b Int64" -if "CSV" -q "SELECT * FROM table"
+```bash
+$ echo -e "1,2\n3,4" | clickhouse-local -S "a Int64, b Int64" -if "CSV" -q "SELECT * FROM table"
Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec.
1 2
3 4
@@ -49,7 +49,7 @@ Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec.
Previous example is the same as:
-``` bash
+```bash
$ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin); SELECT a, b FROM table; DROP TABLE table"
Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec.
1 2
@@ -58,8 +58,10 @@ Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec.
Now let's output memory usage for each Unix user:
-``` bash
+```bash
$ ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' | clickhouse-local -S "user String, mem Float64" -q "SELECT user, round(sum(mem), 2) as memTotal FROM table GROUP BY user ORDER BY memTotal DESC FORMAT Pretty"
+```
+```text
Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec.
┏━━━━━━━━━━┳━━━━━━━━━━┓
┃ user ┃ memTotal ┃
diff --git a/docs/en/query_language/agg_functions/combinators.md b/docs/en/query_language/agg_functions/combinators.md
index d2cf85c3d75..ccad56083c4 100644
--- a/docs/en/query_language/agg_functions/combinators.md
+++ b/docs/en/query_language/agg_functions/combinators.md
@@ -16,7 +16,7 @@ The -Array suffix can be appended to any aggregate function. In this case, the a
Example 1: `sumArray(arr)` - Totals all the elements of all 'arr' arrays. In this example, it could have been written more simply: `sum(arraySum(arr))`.
-Example 2: `uniqArray(arr)` – Count the number of unique elements in all 'arr' arrays. This could be done an easier way: `uniq(arrayJoin(arr))`, but it's not always possible to add 'arrayJoin' to a query.
+Example 2: `uniqArray(arr)` – Counts the number of unique elements in all 'arr' arrays. This could be done an easier way: `uniq(arrayJoin(arr))`, but it's not always possible to add 'arrayJoin' to a query.
-If and -Array can be combined. However, 'Array' must come first, then 'If'. Examples: `uniqArrayIf(arr, cond)`, `quantilesTimingArrayIf(level1, level2)(arr, cond)`. Due to this order, the 'cond' argument can't be an array.
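A small sketch of both combinators over inline data (the values here are arbitrary):
```sql
SELECT
    sumArray(arr) AS total,            -- same as sum(arraySum(arr)): 12
    uniqArrayIf(arr, flag) AS uniques  -- 'Array' comes first, then 'If': 2
FROM
(
    SELECT [1, 2, 2] AS arr, 1 AS flag
    UNION ALL
    SELECT [3, 4] AS arr, 0 AS flag
)
```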
@@ -44,26 +44,26 @@ Merges the intermediate aggregation states in the same way as the -Merge combina
Converts an aggregate function for tables into an aggregate function for arrays that aggregates the corresponding array items and returns an array of results. For example, `sumForEach` for the arrays `[1, 2]`, `[3, 4, 5]`and`[6, 7]`returns the result `[10, 13, 5]` after adding together the corresponding array items.
-## -Resample
+## -Resample {#agg_functions-combinator-resample}
-Allows to divide data by groups, and then separately aggregates the data in those groups. Groups are created by splitting the values of one of the columns into intervals.
+Lets you divide data into groups, and then separately aggregates the data in those groups. Groups are created by splitting the values from one column into intervals.
-```
+```sql
<aggFunction>Resample(start, end, step)(<aggFunction_params>, resampling_key)
```
**Parameters**
-- `start` — Starting value of the whole required interval for the values of `resampling_key`.
-- `stop` — Ending value of the whole required interval for the values of `resampling_key`. The whole interval doesn't include the `stop` value `[start, stop)`.
-- `step` — Step for separating the whole interval by subintervals. The `aggFunction` is executed over each of those subintervals independently.
-- `resampling_key` — Column, which values are used for separating data by intervals.
-- `aggFunction_params` — Parameters of `aggFunction`.
+- `start` — Starting value of the whole required interval for `resampling_key` values.
+- `stop` — Ending value of the whole required interval for `resampling_key` values. The whole interval doesn't include the `stop` value: `[start, stop)`.
+- `step` — Step for separating the whole interval into subintervals. The `aggFunction` is executed over each of those subintervals independently.
+- `resampling_key` — Column whose values are used for separating data into intervals.
+- `aggFunction_params` — `aggFunction` parameters.
**Returned values**
-- Array of `aggFunction` results for each of subintervals.
+- Array of `aggFunction` results for each subinterval.
**Example**
@@ -80,9 +80,9 @@ Consider the `people` table with the following data:
└────────┴─────┴──────┘
```
-Let's get the names of the persons which age lies in the intervals of `[30,60)` and `[60,75)`. As we use integer representation of age, then there are ages of `[30, 59]` and `[60,74]`.
+Let's get the names of the people whose age lies in the intervals of `[30,60)` and `[60,75)`. Since we use integer representation for age, we get ages in the `[30, 59]` and `[60,74]` intervals.
-For aggregating names into the array, we use the aggregate function [groupArray](reference.md#agg_function-grouparray). It takes a single argument. For our case, it is the `name` column. The `groupArrayResample` function should use the `age` column to aggregate names by age. To define required intervals, we pass the `(30, 75, 30)` arguments into the `groupArrayResample` function.
+To aggregate names in an array, we use the [groupArray](reference.md#agg_function-grouparray) aggregate function. It takes one argument. In our case, it's the `name` column. The `groupArrayResample` function should use the `age` column to aggregate names by age. To define the required intervals, we pass the `30, 75, 30` arguments into the `groupArrayResample` function.
```sql
SELECT groupArrayResample(30, 75, 30)(name, age) from people
@@ -95,9 +95,9 @@ SELECT groupArrayResample(30, 75, 30)(name, age) from people
Consider the results.
-`Jonh` is out of the sample because he is too young. Other people are distributed according to the specified age intervals.
+`Jonh` is out of the sample because he's too young. Other people are distributed according to the specified age intervals.
-Now, let's count the total number of people and their average wage in the specified age intervals.
+Now let's count the total number of people and their average wage in the specified age intervals.
```sql
SELECT
diff --git a/docs/en/query_language/agg_functions/index.md b/docs/en/query_language/agg_functions/index.md
index 93da97357d5..42b9bf2a511 100644
--- a/docs/en/query_language/agg_functions/index.md
+++ b/docs/en/query_language/agg_functions/index.md
@@ -15,7 +15,7 @@ During aggregation, all `NULL`s are skipped.
Consider this table:
-```
+```text
┌─x─┬────y─┐
│ 1 │ 2 │
│ 2 │ ᴺᵁᴸᴸ │
@@ -27,34 +27,27 @@ Consider this table:
Let's say you need to total the values in the `y` column:
+```sql
+SELECT sum(y) FROM t_null_big
+```
```
-:) SELECT sum(y) FROM t_null_big
-
-SELECT sum(y)
-FROM t_null_big
-
┌─sum(y)─┐
│ 7 │
└────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
The `sum` function interprets `NULL` as `0`. In particular, this means that if the function receives input of a selection where all the values are `NULL`, then the result will be `0`, not `NULL`.
Now you can use the `groupArray` function to create an array from the `y` column:
+```sql
+SELECT groupArray(y) FROM t_null_big
```
-:) SELECT groupArray(y) FROM t_null_big
-
-SELECT groupArray(y)
-FROM t_null_big
-
+```text
┌─groupArray(y)─┐
│ [2,2,3] │
└───────────────┘
-1 rows in set. Elapsed: 0.002 sec.
```
`groupArray` does not include `NULL` in the resulting array.
diff --git a/docs/en/query_language/agg_functions/parametric_functions.md b/docs/en/query_language/agg_functions/parametric_functions.md
index 3ea26de5937..47196e2a4eb 100644
--- a/docs/en/query_language/agg_functions/parametric_functions.md
+++ b/docs/en/query_language/agg_functions/parametric_functions.md
@@ -6,15 +6,15 @@ Some aggregate functions can accept not only argument columns (used for compress
Calculates an adaptive histogram. It doesn't guarantee precise results.
-```
+```sql
histogram(number_of_bins)(values)
```
-The functions uses [A Streaming Parallel Decision Tree Algorithm](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). The borders of histogram bins are adjusted as a new data enters a function, and in common case the widths of bins are not equal.
+The function uses [A Streaming Parallel Decision Tree Algorithm](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). The borders of histogram bins are adjusted as new data enters the function. In the common case, the widths of bins are not equal.
**Parameters**
-`number_of_bins` — Upper limit for a number of bins for the histogram. Function automatically calculates the number of bins. It tries to reach the specified number of bins, but if it fails, it uses less number of bins.
+`number_of_bins` — Upper limit for the number of bins in the histogram. The function automatically calculates the number of bins. It tries to reach the specified number of bins, but if it fails, it uses fewer bins.
`values` — [Expression](../syntax.md#syntax-expressions) resulting in input values.
**Returned values**
@@ -69,7 +69,7 @@ FROM
└────────┴───────┘
```
-In this case you should remember, that you don't know the borders of histogram bins.
+In this case, you should remember that you don't know the histogram bin borders.
## sequenceMatch(pattern)(time, cond1, cond2, ...)
@@ -90,7 +90,7 @@ Example: `sequenceMatch ('(?1).*(?2)')(EventTime, URL LIKE '%company%', URL LIKE
This is a singular example. You could write it using other aggregate functions:
-```
+```sql
minIf(EventTime, URL LIKE '%company%') < maxIf(EventTime, URL LIKE '%cart%')
```
@@ -153,7 +153,7 @@ Set the following chain of events:
To find out how far the user `user_id` could get through the chain in an hour in January of 2017, make the query:
-```
+```sql
SELECT
level,
count() AS c
@@ -184,7 +184,7 @@ Consider you are doing a website analytics, intend to calculate the retention of
This can easily be calculated with `retention`:
-```
+```sql
SELECT
sum(r[1]) AS r1,
sum(r[2]) AS r2,
@@ -218,7 +218,7 @@ It works as fast as possible, except for cases when a large N value is used and
Usage example:
-```
+```text
Problem: Generate a report that shows only keywords that produced at least 5 unique users.
Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
```
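As a sketch, that query might look like this (the `SearchLog` table name is hypothetical; `SearchPhrase` and `UserID` come from the example above):
```sql
SELECT SearchPhrase
FROM SearchLog
GROUP BY SearchPhrase
HAVING uniqUpTo(4)(UserID) >= 5
```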
diff --git a/docs/en/query_language/agg_functions/reference.md b/docs/en/query_language/agg_functions/reference.md
index a43e84112b0..a7e9fa36741 100644
--- a/docs/en/query_language/agg_functions/reference.md
+++ b/docs/en/query_language/agg_functions/reference.md
@@ -79,7 +79,7 @@ When a `SELECT` query has the `GROUP BY` clause or at least one aggregate functi
Selects a frequently occurring value using the [heavy hitters](http://www.cs.umd.edu/~samir/498/karp.pdf) algorithm. If there is a value that occurs in more than half of the cases in each of the query's execution threads, this value is returned. Normally, the result is nondeterministic.
-```
+```sql
anyHeavy(column)
```
@@ -91,12 +91,12 @@ anyHeavy(column)
Take the [OnTime](../../getting_started/example_datasets/ontime.md) data set and select any frequently occurring value in the `AirlineID` column.
-``` sql
+```sql
SELECT anyHeavy(AirlineID) AS res
FROM ontime
```
-```
+```text
┌───res─┐
│ 19690 │
└───────┘
@@ -111,7 +111,7 @@ The result is just as indeterminate as for the `any` function.
Applies bitwise `AND` for series of numbers.
-```
+```sql
groupBitAnd(expr)
```
@@ -127,7 +127,7 @@ Value of the `UInt*` type.
Test data:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -137,7 +137,7 @@ binary decimal
Query:
-```
+```sql
SELECT groupBitAnd(num) FROM t
```
@@ -145,7 +145,7 @@ Where `num` is the column with the test data.
Result:
-```
+```text
binary decimal
00000100 = 4
```
@@ -154,7 +154,7 @@ binary decimal
Applies bitwise `OR` for series of numbers.
-```
+```sql
groupBitOr(expr)
```
@@ -170,7 +170,7 @@ Value of the `UInt*` type.
Test data:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -180,7 +180,7 @@ binary decimal
Query:
-```
+```sql
SELECT groupBitOr(num) FROM t
```
@@ -188,7 +188,7 @@ Where `num` is the column with the test data.
Result:
-```
+```text
binary decimal
01111101 = 125
```
@@ -197,7 +197,7 @@ binary decimal
Applies bitwise `XOR` for series of numbers.
-```
+```sql
groupBitXor(expr)
```
@@ -213,7 +213,7 @@ Value of the `UInt*` type.
Test data:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -223,7 +223,7 @@ binary decimal
Query:
-```
+```sql
SELECT groupBitXor(num) FROM t
```
@@ -231,7 +231,7 @@ Where `num` is the column with the test data.
Result:
-```
+```text
binary decimal
01101000 = 104
```
@@ -241,7 +241,7 @@ binary decimal
Performs bitmap or aggregate calculations on an unsigned integer column, returning the cardinality as `UInt64`. If the `-State` suffix is added, it returns a [bitmap object](../functions/bitmap_functions.md).
-```
+```sql
groupBitmap(expr)
```
@@ -257,7 +257,7 @@ Value of the `UInt64` type.
Test data:
-```
+```text
UserID
1
1
@@ -267,13 +267,13 @@ UserID
Query:
-```
+```sql
SELECT groupBitmap(UserID) as num FROM t
```
Result:
-```
+```text
num
3
```
@@ -291,15 +291,17 @@ Calculates the maximum.
Calculates the 'arg' value for a minimal 'val' value. If there are several different values of 'arg' for minimal values of 'val', the first of these values encountered is output.
**Example:**
-```
+```text
┌─user─────┬─salary─┐
│ director │ 5000 │
│ manager │ 3000 │
│ worker │ 1000 │
└──────────┴────────┘
-
+```
+```sql
SELECT argMin(user, salary) FROM salary
-
+```
+```text
┌─argMin(user, salary)─┐
│ worker │
└──────────────────────┘
@@ -330,7 +332,7 @@ Returns a tuple of two arrays: keys in sorted order, and values summed for
Example:
-``` sql
+```sql
CREATE TABLE sum_map(
date Date,
timeslot DateTime,
@@ -351,7 +353,7 @@ FROM sum_map
GROUP BY timeslot
```
-```
+```text
┌────────────timeslot─┬─sumMap(statusMap.status, statusMap.requests)─┐
│ 2000-01-01 00:00:00 │ ([1,2,3,4,5],[10,10,20,10,10]) │
│ 2000-01-01 00:01:00 │ ([4,5,6,7,8],[10,10,20,10,10]) │
@@ -362,7 +364,7 @@ GROUP BY timeslot
Computes the [skewness](https://en.wikipedia.org/wiki/Skewness) of a sequence.
-```
+```sql
skewPop(expr)
```
@@ -386,7 +388,7 @@ Computes the [sample skewness](https://en.wikipedia.org/wiki/Skewness) of a sequ
It represents an unbiased estimate of the skewness of a random variable if passed values form its sample.
-```
+```sql
skewSamp(expr)
```
@@ -408,7 +410,7 @@ SELECT skewSamp(value) FROM series_with_value_column
Computes the [kurtosis](https://en.wikipedia.org/wiki/Kurtosis) of a sequence.
-```
+```sql
kurtPop(expr)
```
@@ -432,7 +434,7 @@ Computes the [sample kurtosis](https://en.wikipedia.org/wiki/Kurtosis) of a sequ
It represents an unbiased estimate of the kurtosis of a random variable if passed values form its sample.
-```
+```sql
kurtSamp(expr)
```
@@ -463,7 +465,7 @@ The function returns array of tuples with `(timestamp, aggregated_value)` pairs.
Before using this function make sure `timestamp` is in ascending order.
Example:
-```
+```text
┌─uid─┬─timestamp─┬─value─┐
│ 1 │ 2 │ 0.2 │
│ 1 │ 7 │ 0.7 │
@@ -477,7 +479,7 @@ Example:
│ 2 │ 24 │ 4.8 │
└─────┴───────────┴───────┘
```
-```
+```sql
CREATE TABLE time_series(
uid UInt64,
timestamp Int64,
@@ -493,7 +495,7 @@ FROM (
);
```
And the result will be:
-```
+```text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```
@@ -502,7 +504,7 @@ Similarly timeSeriesGroupRateSum, timeSeriesGroupRateSum will Calculate the rate
Also, the timestamp should be in ascending order before using this function.
Applying this function to the example above, the result will be:
-```
+```text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```
@@ -516,7 +518,7 @@ The result is always Float64.
Calculates the approximate number of different values of the argument.
-```
+```sql
uniq(x[, ...])
```
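A quick self-contained sketch using the `numbers` table function:
```sql
SELECT uniq(number % 10) FROM numbers(1000) -- 10 distinct values
```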
@@ -551,7 +553,7 @@ We recommend using this function in almost all scenarios.
Calculates the approximate number of different argument values.
-```
+```sql
uniqCombined(HLL_precision)(x[, ...])
```
@@ -595,7 +597,7 @@ Compared to the [uniq](#agg_function-uniq) function, the `uniqCombined`:
Calculates the approximate number of different argument values, using the [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) algorithm.
-```
+```sql
uniqHLL12(x[, ...])
```
@@ -631,7 +633,7 @@ We don't recommend using this function. In most cases, use the [uniq](#agg_funct
Calculates the exact number of different argument values.
-```
+```sql
uniqExact(x[, ...])
```
@@ -676,7 +678,7 @@ Optional parameters:
Calculates the moving sum of input values.
-```
+```sql
groupArrayMovingSum(numbers_for_summing)
groupArrayMovingSum(window_size)(numbers_for_summing)
```
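A minimal sketch with a window of size 2 over the `numbers` table function:
```sql
SELECT groupArrayMovingSum(2)(number) FROM numbers(5) -- [0,1,3,5,7]
```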
@@ -745,7 +747,7 @@ FROM t
Calculates the moving average of input values.
-```
+```sql
groupArrayMovingAvg(numbers_for_summing)
groupArrayMovingAvg(window_size)(numbers_for_summing)
```
@@ -848,16 +850,16 @@ Don't use this function for calculating timings. There is a more suitable functi
## quantileTiming {#agg_function-quantiletiming}
-Computes the quantile of the specified level with determined precision. The function intended for calculating quantiles of page loading time in milliseconds.
+Computes the quantile of the specified level with determined precision. The function is intended for calculating page loading time quantiles in milliseconds.
-```
+```sql
quantileTiming(level)(expr)
```
**Parameters**
- `level` — Quantile level. Range: [0, 1].
-- `expr` — [Expression](../syntax.md#syntax-expressions) returning number in the [Float*](../../data_types/float.md) type. The function expects input values in unix timestamp format in milliseconds, but it doesn't validate format.
+- `expr` — [Expression](../syntax.md#syntax-expressions) returning a [Float*](../../data_types/float.md)-type number. The function expects input values in unix timestamp format in milliseconds, but it doesn't validate the format.
- If negative values are passed to the function, the behavior is undefined.
- If the value is greater than 30,000 (a page loading time of more than 30 seconds), it is assumed to be 30,000.
@@ -866,13 +868,13 @@ quantileTiming(level)(expr)
The calculation is accurate if:
-- Total number of values is not more than about 5670.
-- Total number of values is more than about 5670, but the times of page loading is less than 1024ms.
+- Total number of values doesn't exceed 5670.
+- Total number of values exceeds 5670, but the page loading time is less than 1024ms.
-Otherwise, the result of a calculation is rounded to the value, multiple of 16 ms.
+Otherwise, the result of the calculation is rounded to the nearest multiple of 16 ms.
!!! note "Note"
- For calculating quantiles of page loading times, this function is more effective and accurate compared to [quantile](#agg_function-quantile).
+ For calculating page loading time quantiles, this function is more effective and accurate than [quantile](#agg_function-quantile).
**Returned value**
@@ -881,9 +883,9 @@ Otherwise, the result of a calculation is rounded to the value, multiple of 16 m
Type: `Float32`.
!!! note "Note"
- If no values were passed to the function (when using `quantileTimingIf`), [NaN](../../data_types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from the cases which result in zero. See [ORDER BY clause](../select.md#select-order-by) for the note on sorting `NaN` values.
+ If no values are passed to the function (when using `quantileTimingIf`), [NaN](../../data_types/float.md#data_type-float-nan-inf) is returned. The purpose of this is to differentiate these cases from cases that result in zero. See [ORDER BY clause](../select.md#select-order-by) for notes on sorting `NaN` values.
-The result is deterministic (it doesn't depend on the order of query processing).
+The result is deterministic (it doesn't depend on the query processing order).
**Example**
@@ -955,7 +957,7 @@ Returns an array of the most frequent values in the specified column. The result
Implements the [Filtered Space-Saving](http://www.l2f.inesc-id.pt/~fmmb/wiki/uploads/Work/misnis.ref0a.pdf) algorithm for analyzing TopK, based on the reduce-and-combine algorithm from [Parallel Space Saving](https://arxiv.org/pdf/1401.0702.pdf).
-```
+```sql
topK(N)(column)
```
@@ -972,12 +974,12 @@ We recommend using the `N < 10 ` value; performance is reduced with large `N` va
Take the [OnTime](../../getting_started/example_datasets/ontime.md) data set and select the three most frequently occurring values in the `AirlineID` column.
-``` sql
+```sql
SELECT topK(3)(AirlineID) AS res
FROM ontime
```
-```
+```text
┌─res─────────────────┐
│ [19393,19790,19805] │
└─────────────────────┘
@@ -1001,7 +1003,7 @@ Calculates the Pearson correlation coefficient: `Σ((x - x̅)(y - y̅)) / sqrt(
Performs simple (unidimensional) linear regression.
-```
+```sql
simpleLinearRegression(x, y)
```
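A self-contained sketch fitting the line `y = 2x + 1`:
```sql
SELECT simpleLinearRegression(x, y)
FROM (SELECT number AS x, 2 * number + 1 AS y FROM numbers(10))
-- returns (2,1): the slope and the intercept
```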
diff --git a/docs/en/query_language/alter.md b/docs/en/query_language/alter.md
index 84ff4f390a8..5c1d6331add 100644
--- a/docs/en/query_language/alter.md
+++ b/docs/en/query_language/alter.md
@@ -6,7 +6,7 @@ The `ALTER` query is only supported for `*MergeTree` tables, as well as `Merge`a
Changing the table structure.
-``` sql
+```sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|CLEAR|COMMENT|MODIFY COLUMN ...
```
@@ -25,7 +25,7 @@ These actions are described in detail below.
#### ADD COLUMN {#alter_add-column}
-``` sql
+```sql
ADD COLUMN [IF NOT EXISTS] name [type] [default_expr] [AFTER name_after]
```
@@ -39,13 +39,13 @@ This approach allows us to complete the `ALTER` query instantly, without increas
Example:
-``` sql
+```sql
ALTER TABLE visits ADD COLUMN browser String AFTER user_id
```
#### DROP COLUMN {#alter_drop-column}
-``` sql
+```sql
DROP COLUMN [IF EXISTS] name
```
@@ -55,13 +55,13 @@ Deletes data from the file system. Since this deletes entire files, the query is
Example:
-``` sql
+```sql
ALTER TABLE visits DROP COLUMN browser
```
#### CLEAR COLUMN {#alter_clear-column}
-``` sql
+```sql
CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name
```
@@ -71,13 +71,13 @@ If the `IF EXISTS` clause is specified, the query won't return an error if the c
Example:
-``` sql
+```sql
ALTER TABLE visits CLEAR COLUMN browser IN PARTITION tuple()
```
#### COMMENT COLUMN {#alter_comment-column}
-``` sql
+```sql
COMMENT COLUMN [IF EXISTS] name 'comment'
```
@@ -89,13 +89,13 @@ Comments are stored in the `comment_expression` column returned by the [DESCRIBE
Example:
-``` sql
+```sql
ALTER TABLE visits COMMENT COLUMN browser 'The table shows the browser used for accessing the site.'
```
#### MODIFY COLUMN {#alter_modify-column}
-``` sql
+```sql
MODIFY COLUMN [IF EXISTS] name [type] [default_expr]
```
@@ -105,7 +105,7 @@ When changing the type, values are converted as if the [toType](functions/type_c
Example:
-``` sql
+```sql
ALTER TABLE visits MODIFY COLUMN browser Array(String)
```
@@ -139,7 +139,7 @@ For tables that don't store data themselves (such as `Merge` and `Distributed`),
The following command is supported:
-``` sql
+```sql
MODIFY ORDER BY new_expression
```
@@ -171,7 +171,7 @@ Also, they are replicated (syncing indices metadata through ZooKeeper).
See more on [constraints](create.md#constraints)
Constraints can be added or deleted using the following syntax:
-```
+```sql
ALTER TABLE [db].name ADD CONSTRAINT constraint_name CHECK expression;
ALTER TABLE [db].name DROP CONSTRAINT constraint_name;
```
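For example, using the `visits` table from the examples above (the constraint name is hypothetical):
```sql
ALTER TABLE visits ADD CONSTRAINT non_empty_browser CHECK browser != '';
ALTER TABLE visits DROP CONSTRAINT non_empty_browser;
```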
@@ -197,7 +197,7 @@ The following operations with [partitions](../operations/table_engines/custom_pa
#### DETACH PARTITION {#alter_detach-partition}
-``` sql
+```sql
ALTER TABLE table_name DETACH PARTITION partition_expr
```
@@ -205,7 +205,7 @@ Moves all data for the specified partition to the `detached` directory. The serv
Example:
-``` sql
+```sql
ALTER TABLE visits DETACH PARTITION 201901
```
@@ -217,7 +217,7 @@ This query is replicated – it moves the data to the `detached` directory on al
#### DROP PARTITION {#alter_drop-partition}
-``` sql
+```sql
ALTER TABLE table_name DROP PARTITION partition_expr
```
@@ -245,7 +245,7 @@ ALTER TABLE table_name ATTACH PARTITION|PART partition_expr
Adds data to the table from the `detached` directory. It is possible to add data for an entire partition or for a separate part. Examples:
-``` sql
+```sql
ALTER TABLE visits ATTACH PARTITION 201901;
ALTER TABLE visits ATTACH PART 201901_2_2_0;
```
@@ -258,7 +258,7 @@ So you can put data to the `detached` directory on one replica, and use the `ALT
#### REPLACE PARTITION {#alter_replace-partition}
-``` sql
+```sql
ALTER TABLE table2 REPLACE PARTITION partition_expr FROM table1
```
@@ -271,7 +271,7 @@ For the query to run successfully, the following conditions must be met:
#### CLEAR COLUMN IN PARTITION {#alter_clear-column-partition}
-``` sql
+```sql
ALTER TABLE table_name CLEAR COLUMN column_name IN PARTITION partition_expr
```
@@ -279,13 +279,13 @@ Resets all values in the specified column in a partition. If the `DEFAULT` claus
Example:
-``` sql
+```sql
ALTER TABLE visits CLEAR COLUMN hour in PARTITION 201902
```
#### FREEZE PARTITION {#alter_freeze-partition}
-``` sql
+```sql
ALTER TABLE table_name FREEZE [PARTITION partition_expr]
```
@@ -321,7 +321,7 @@ For more information about backups and restoring data, see the [Data Backup](../
#### CLEAR INDEX IN PARTITION {#alter_clear-index-partition}
-``` sql
+```sql
ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr
```
@@ -329,7 +329,7 @@ The query works similar to `CLEAR COLUMN`, but it resets an index instead of a c
#### FETCH PARTITION {#alter_fetch-partition}
-``` sql
+```sql
ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'path-in-zookeeper'
```
@@ -342,7 +342,7 @@ The query does the following:
For example:
-``` sql
+```sql
ALTER TABLE users FETCH PARTITION 201902 FROM '/clickhouse/tables/01-01/visits';
ALTER TABLE users ATTACH PARTITION 201902;
```
@@ -370,11 +370,11 @@ For old-style tables, you can specify the partition either as a number `201901`
All the rules above are also true for the [OPTIMIZE](misc.md#misc_operations-optimize) query. If you need to specify the only partition when optimizing a non-partitioned table, set the expression `PARTITION tuple()`. For example:
-``` sql
+```sql
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```
-The examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
+The examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
### Synchronicity of ALTER Queries
@@ -393,19 +393,19 @@ Existing tables are ready for mutations as-is (no conversion necessary), but aft
Currently available commands:
-``` sql
+```sql
ALTER TABLE [db.]table DELETE WHERE filter_expr
```
The `filter_expr` must be of type UInt8. The query deletes rows in the table for which this expression takes a non-zero value.
-``` sql
+```sql
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
```
The command is available starting with version 18.12.14. The `filter_expr` must be of type UInt8. This query updates values of the specified columns to the values of the corresponding expressions in rows for which `filter_expr` takes a non-zero value. Values are cast to the column type using the `CAST` operator. Updating columns that are used in the calculation of the primary or the partition key is not supported.
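For example, a sketch using the `visits` table from the examples above (the filter values are hypothetical):
```sql
ALTER TABLE visits DELETE WHERE user_id = 42;
ALTER TABLE visits UPDATE browser = 'unknown' WHERE browser = '';
```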
-``` sql
+```sql
ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name
```
diff --git a/docs/en/query_language/create.md b/docs/en/query_language/create.md
index 20790ffb834..2a8c546233c 100644
--- a/docs/en/query_language/create.md
+++ b/docs/en/query_language/create.md
@@ -4,7 +4,7 @@
Creates database.
-``` sql
+```sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster] [ENGINE = engine(...)]
```
@@ -48,19 +48,19 @@ The structure of the table is a list of column descriptions. If indexes are supp
A column description is `name type` in the simplest case. Example: `RegionID UInt32`.
Expressions can also be defined for default values (see below).
-``` sql
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine]
```
Creates a table with the same structure as another table. You can specify a different engine for the table. If the engine is not specified, the same engine will be used as for the `db2.name2` table.
-``` sql
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function()
```
Creates a table with the structure and data returned by a [table function](table_functions/index.md).
-``` sql
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ...
```
@@ -149,7 +149,7 @@ ENGINE =
If a codec is specified, the default codec doesn't apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, run benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article.
!!!warning
- You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/yandex/ClickHouse/tree/master/dbms/programs/compressor).
+ You cannot decompress ClickHouse database files with external utilities, for example, `lz4`. Use the special utility, [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/programs/compressor).
Compression is supported for the table engines:
@@ -221,7 +221,7 @@ In most cases, temporary tables are not created manually, but when using externa
The `CREATE`, `DROP`, `ALTER`, and `RENAME` queries support distributed execution on a cluster.
For example, the following query creates the `all_hits` `Distributed` table on each host in `cluster`:
-``` sql
+```sql
CREATE TABLE IF NOT EXISTS all_hits ON CLUSTER cluster (p Date, i Int32) ENGINE = Distributed(cluster, default, hits)
```
@@ -231,7 +231,7 @@ The local version of the query will eventually be implemented on each host in th
## CREATE VIEW
-``` sql
+```sql
CREATE [MATERIALIZED] VIEW [IF NOT EXISTS] [db.]table_name [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```
@@ -241,19 +241,19 @@ Normal views don't store any data, but just perform a read from another table. I
As an example, assume you've created a view:
-``` sql
+```sql
CREATE VIEW view AS SELECT ...
```
and written a query:
-``` sql
+```sql
SELECT a, b, c FROM view
```
This query is fully equivalent to using the subquery:
-``` sql
+```sql
SELECT a, b, c FROM (SELECT ...)
```
diff --git a/docs/en/query_language/dicts/external_dicts_dict_layout.md b/docs/en/query_language/dicts/external_dicts_dict_layout.md
index a9a80dbe761..15bb4850cb9 100644
--- a/docs/en/query_language/dicts/external_dicts_dict_layout.md
+++ b/docs/en/query_language/dicts/external_dicts_dict_layout.md
@@ -112,7 +112,7 @@ This storage method works the same way as hashed and allows using date/time (arb
Example: The table contains discounts for each advertiser in the format:
-```
+```text
+---------------+---------------------+-------------------+--------+
| advertiser id | discount start date | discount end date | amount |
+===============+=====================+===================+========+
@@ -146,7 +146,7 @@ Example:
To work with these dictionaries, you need to pass an additional argument to the `dictGetT` function, for which a range is selected:
-```
+```sql
dictGetT('dict_name', 'attr_name', id, date)
```
@@ -240,7 +240,7 @@ This type of storage is for mapping network prefixes (IP addresses) to metadata
Example: The table contains network prefixes and their corresponding AS number and country code:
-```
+```text
+-----------------+-------+--------+
| prefix | asn | cca2 |
+=================+=======+========+
@@ -283,13 +283,13 @@ The key must have only one String type attribute that contains an allowed IP pre
For queries, you must use the same functions (`dictGetT` with a tuple) as for dictionaries with composite keys:
-```
+```sql
dictGetT('dict_name', 'attr_name', tuple(ip))
```
The function takes either `UInt32` for IPv4, or `FixedString(16)` for IPv6:
-```
+```sql
dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))
```
diff --git a/docs/en/query_language/dicts/external_dicts_dict_sources.md b/docs/en/query_language/dicts/external_dicts_dict_sources.md
index d670c24c5fb..493b75a9cbb 100644
--- a/docs/en/query_language/dicts/external_dicts_dict_sources.md
+++ b/docs/en/query_language/dicts/external_dicts_dict_sources.md
@@ -131,7 +131,7 @@ If you have a problems with encodings when using Oracle, see the corresponding [
Let's configure unixODBC for PostgreSQL. Content of `/etc/odbc.ini`:
-```
+```text
[gregtest]
Driver = /usr/lib/psqlodbca.so
Servername = localhost
@@ -144,7 +144,7 @@ PASSWORD = test
If you then make a query such as
-```
+```sql
SELECT * FROM odbc('DSN=gregtest;Servername=some-server.com', 'test_db');
```
@@ -155,12 +155,13 @@ ODBC driver will send values of `USERNAME` and `PASSWORD` from `odbc.ini` to `so
Ubuntu OS.
Installing unixODBC and the ODBC driver for PostgreSQL:
-
- sudo apt-get install -y unixodbc odbcinst odbc-postgresql
+```bash
+$ sudo apt-get install -y unixodbc odbcinst odbc-postgresql
+```
Configuring `/etc/odbc.ini` (or `~/.odbc.ini`):
-```
+```text
[DEFAULT]
Driver = myconnection
@@ -222,13 +223,13 @@ Ubuntu OS.
Installing the driver:
-```
- sudo apt-get install tdsodbc freetds-bin sqsh
+```bash
+$ sudo apt-get install tdsodbc freetds-bin sqsh
```
-Configuring the driver: :
+Configuring the driver:
-```
+```bash
$ cat /etc/freetds/freetds.conf
...
diff --git a/docs/en/query_language/functions/arithmetic_functions.md b/docs/en/query_language/functions/arithmetic_functions.md
index 0f4795ec0b9..fb0c3939a2a 100644
--- a/docs/en/query_language/functions/arithmetic_functions.md
+++ b/docs/en/query_language/functions/arithmetic_functions.md
@@ -4,11 +4,11 @@ For all arithmetic functions, the result type is calculated as the smallest numb
Example:
-``` sql
+```sql
SELECT toTypeName(0), toTypeName(0 + 0), toTypeName(0 + 0 + 0), toTypeName(0 + 0 + 0 + 0)
```
-```
+```text
┌─toTypeName(0)─┬─toTypeName(plus(0, 0))─┬─toTypeName(plus(plus(0, 0), 0))─┬─toTypeName(plus(plus(plus(0, 0), 0), 0))─┐
│ UInt8 │ UInt16 │ UInt32 │ UInt64 │
└───────────────┴────────────────────────┴─────────────────────────────────┴──────────────────────────────────────────┘
diff --git a/docs/en/query_language/functions/array_functions.md b/docs/en/query_language/functions/array_functions.md
index 5065d428994..bea2c0a6ec6 100644
--- a/docs/en/query_language/functions/array_functions.md
+++ b/docs/en/query_language/functions/array_functions.md
@@ -49,7 +49,7 @@ Returns an 'Array(T)' type result, where 'T' is the smallest common type out of
Combines arrays passed as arguments.
-```
+```sql
arrayConcat(arrays)
```
@@ -82,9 +82,10 @@ Returns 0 if the the element is not in the array, or 1 if it is.
`NULL` is processed as a value.
-```
+```sql
SELECT has([1, 2, NULL], NULL)
-
+```
+```text
┌─has([1, 2, NULL], NULL)─┐
│ 1 │
└─────────────────────────┘
@@ -94,7 +95,7 @@ SELECT has([1, 2, NULL], NULL)
Checks whether one array is a subset of another.
-```
+```sql
hasAll(set, subset)
```
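A quick illustration with literal arrays:
```sql
SELECT hasAll([1, 2, 3, 4], [2, 4]) -- returns 1: every element of the subset is present
```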
@@ -132,7 +133,7 @@ hasAll(set, subset)
Checks whether two arrays have any elements in common.
-```
+```sql
hasAny(array1, array2)
```
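Similarly, a quick illustration:
```sql
SELECT hasAny([1, 2, 3], [0, 3]) -- returns 1: the arrays share the element 3
```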
@@ -169,10 +170,10 @@ Returns the index of the first 'x' element (starting from 1) if it is in the arr
Example:
-```
-:) SELECT indexOf([1,3,NULL,NULL],NULL)
-
+```sql
SELECT indexOf([1, 3, NULL, NULL], NULL)
+```
+```text
┌─indexOf([1, 3, NULL, NULL], NULL)─┐
│ 3 │
@@ -189,9 +190,10 @@ Returns the number of elements in the array equal to x. Equivalent to arrayCount
Example:
-```
+```sql
SELECT countEqual([1, 2, NULL, NULL], NULL)
-
+```
+```text
┌─countEqual([1, 2, NULL, NULL], NULL)─┐
│ 2 │
└──────────────────────────────────────┘
@@ -293,7 +295,7 @@ This is necessary when using ARRAY JOIN with a nested data structure and further
Removes the last item from the array.
-```
+```sql
arrayPopBack(array)
```
@@ -316,7 +318,7 @@ SELECT arrayPopBack([1, 2, 3]) AS res
Removes the first item from the array.
-```
+```sql
arrayPopFront(array)
```
@@ -339,7 +341,7 @@ SELECT arrayPopFront([1, 2, 3]) AS res
Adds one item to the end of the array.
-```
+```sql
arrayPushBack(array, single_value)
```
@@ -363,7 +365,7 @@ SELECT arrayPushBack(['a'], 'b') AS res
Adds one element to the beginning of the array.
-```
+```sql
arrayPushFront(array, single_value)
```
@@ -387,7 +389,7 @@ SELECT arrayPushBack(['b'], 'a') AS res
Changes the length of the array.
-```
+```sql
arrayResize(array, size[, extender])
```
@@ -405,17 +407,19 @@ An array of length `size`.
**Examples of calls**
-```
+```sql
SELECT arrayResize([1], 3)
-
+```
+```text
┌─arrayResize([1], 3)─┐
│ [1,0,0] │
└─────────────────────┘
```
-```
+```sql
SELECT arrayResize([1], 3, NULL)
-
+```
+```text
┌─arrayResize([1], 3, NULL)─┐
│ [1,NULL,NULL] │
└───────────────────────────┘
@@ -425,7 +429,7 @@ SELECT arrayResize([1], 3, NULL)
Returns a slice of the array.
-```
+```sql
arraySlice(array, offset[, length])
```
@@ -653,7 +657,7 @@ Takes an array, returns an array with the difference between all pairs of neighb
SELECT arrayDifference([1, 2, 3, 4])
```
-```
+```text
┌─arrayDifference([1, 2, 3, 4])─┐
│ [0,1,1,1] │
└───────────────────────────────┘
@@ -667,7 +671,7 @@ Takes an array, returns an array containing the different elements in all the ar
SELECT arrayDistinct([1, 2, 2, 3, 1])
```
-```
+```text
┌─arrayDistinct([1, 2, 2, 3, 1])─┐
│ [1,2,3] │
└────────────────────────────────┘
@@ -687,7 +691,7 @@ SELECT
arrayIntersect([1, 2], [1, 3], [1, 4]) AS intersect
```
-```
+```text
┌─no_intersect─┬─intersect─┐
│ [] │ [1] │
└──────────────┴───────────┘
diff --git a/docs/en/query_language/functions/array_join.md b/docs/en/query_language/functions/array_join.md
index ede5b5e9d41..25dfc626581 100644
--- a/docs/en/query_language/functions/array_join.md
+++ b/docs/en/query_language/functions/array_join.md
@@ -19,7 +19,7 @@ Example:
SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src
```
-```
+```text
┌─dst─┬─\'Hello\'─┬─src─────┐
│ 1 │ Hello │ [1,2,3] │
│ 2 │ Hello │ [1,2,3] │
diff --git a/docs/en/query_language/functions/bitmap_functions.md b/docs/en/query_language/functions/bitmap_functions.md
index 20cfefa4f1e..f50097b08cb 100644
--- a/docs/en/query_language/functions/bitmap_functions.md
+++ b/docs/en/query_language/functions/bitmap_functions.md
@@ -13,7 +13,7 @@ For more information on RoaringBitmap, see: [CRoaring](https://github.com/Roarin
Builds a bitmap from an unsigned integer array.
-```
+```sql
bitmapBuild(array)
```
@@ -36,7 +36,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res)
Converts a bitmap to an integer array.
-```
+```sql
bitmapToArray(bitmap)
```
@@ -50,7 +50,7 @@ bitmapToArray(bitmap)
SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res
```
-```
+```text
┌─res─────────┐
│ [1,2,3,4,5] │
└─────────────┘
@@ -60,7 +60,7 @@ SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res
Returns a subset of the bitmap in the specified range (`range_end` is not included).
-```
+```sql
bitmapSubsetInRange(bitmap, range_start, range_end)
```
@@ -72,11 +72,11 @@ bitmapSubsetInRange(bitmap, range_start, range_end)
**Example**
-``` sql
+```sql
SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res
```
-```
+```text
┌─res───────────────┐
│ [30,31,32,33,100] │
└───────────────────┘
@@ -112,7 +112,7 @@ SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12
Checks whether the bitmap contains an element.
-```
+```sql
bitmapContains(haystack, needle)
```
@@ -143,7 +143,7 @@ SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res
Checks whether two bitmaps have any elements in common.
-```
+```sql
bitmapHasAny(bitmap1, bitmap2)
```
@@ -160,11 +160,11 @@ If you are sure that `bitmap2` contains strictly one element, consider using the
**Example**
-``` sql
+```sql
SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -175,7 +175,7 @@ SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
Analogous to `hasAll(array, array)`: returns 1 if the first bitmap contains all the elements of the second one, 0 otherwise.
If the second argument is an empty bitmap, returns 1.
-```
+```sql
bitmapHasAll(bitmap,bitmap)
```
@@ -185,11 +185,11 @@ bitmapHasAll(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```
-```
+```text
┌─res─┐
│ 0 │
└─────┘
@@ -200,7 +200,7 @@ SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
Computes the logical AND of two bitmaps. The result is a new bitmap.
-```
+```sql
bitmapAnd(bitmap,bitmap)
```
@@ -214,7 +214,7 @@ bitmapAnd(bitmap,bitmap)
SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res─┐
│ [3] │
└─────┘
@@ -225,7 +225,7 @@ SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re
Computes the logical OR of two bitmaps. The result is a new bitmap.
-```
+```sql
bitmapOr(bitmap,bitmap)
```
@@ -235,11 +235,11 @@ bitmapOr(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res─────────┐
│ [1,2,3,4,5] │
└─────────────┘
@@ -249,7 +249,7 @@ SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
Computes the logical XOR of two bitmaps. The result is a new bitmap.
-```
+```sql
bitmapXor(bitmap,bitmap)
```
@@ -259,11 +259,11 @@ bitmapXor(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res───────┐
│ [1,2,4,5] │
└───────────┘
@@ -273,7 +273,7 @@ SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re
Computes the AND-NOT of two bitmaps. The result is a new bitmap.
-```
+```sql
bitmapAndnot(bitmap,bitmap)
```
@@ -283,11 +283,11 @@ bitmapAndnot(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res───┐
│ [1,2] │
└───────┘
@@ -298,7 +298,7 @@ SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS
Returns the bitmap cardinality as a `UInt64`.
-```
+```sql
bitmapCardinality(bitmap)
```
@@ -308,11 +308,11 @@ bitmapCardinality(bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res
```
-```
+```text
┌─res─┐
│ 5 │
└─────┘
@@ -373,7 +373,7 @@ SELECT bitmapMax(bitmapBuild([1, 2, 3, 4, 5])) AS res
Computes the logical AND of two bitmaps and returns the cardinality as a `UInt64`.
-```
+```sql
bitmapAndCardinality(bitmap,bitmap)
```
@@ -383,11 +383,11 @@ bitmapAndCardinality(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -398,7 +398,7 @@ SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Computes the logical OR of two bitmaps and returns the cardinality as a `UInt64`.
-```
+```sql
bitmapOrCardinality(bitmap,bitmap)
```
@@ -408,11 +408,11 @@ bitmapOrCardinality(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 5 │
└─────┘
@@ -422,7 +422,7 @@ SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Computes the logical XOR of two bitmaps and returns the cardinality as a `UInt64`.
-```
+```sql
bitmapXorCardinality(bitmap,bitmap)
```
@@ -432,11 +432,11 @@ bitmapXorCardinality(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 4 │
└─────┘
@@ -447,7 +447,7 @@ SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Computes the AND-NOT of two bitmaps and returns the cardinality as a `UInt64`.
-```
+```sql
bitmapAndnotCardinality(bitmap,bitmap)
```
@@ -457,11 +457,11 @@ bitmapAndnotCardinality(bitmap,bitmap)
**Example**
-``` sql
+```sql
SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 2 │
└─────┘
diff --git a/docs/en/query_language/functions/conditional_functions.md b/docs/en/query_language/functions/conditional_functions.md
index b5c7ce583cc..074df25303f 100644
--- a/docs/en/query_language/functions/conditional_functions.md
+++ b/docs/en/query_language/functions/conditional_functions.md
@@ -11,7 +11,7 @@ Returns `then` if `cond != 0`, or `else` if `cond = 0`.
Allows you to write the [CASE](../operators.md#operator_case) operator more compactly in the query.
-```
+```sql
multiIf(cond_1, then_1, cond_2, then_2...else)
```
@@ -31,7 +31,7 @@ The function returns one of the values `then_N` or `else`, depending on the cond
Take the table
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -40,7 +40,7 @@ Take the table
Run the query `SELECT multiIf(isNull(y), x, y < 3, y, NULL) FROM t_null`. Result:
-```
+```text
┌─multiIf(isNull(y), x, less(y, 3), y, NULL)─┐
│ 1 │
│ ᴺᵁᴸᴸ │
diff --git a/docs/en/query_language/functions/date_time_functions.md b/docs/en/query_language/functions/date_time_functions.md
index e8716d8c542..c22457d3fcc 100644
--- a/docs/en/query_language/functions/date_time_functions.md
+++ b/docs/en/query_language/functions/date_time_functions.md
@@ -4,7 +4,7 @@ Support for time zones
All functions for working with the date and time that have a logical use for the time zone can accept a second optional time zone argument. Example: Asia/Yekaterinburg. In this case, they use the specified time zone instead of the local (default) one.
-``` sql
+```sql
SELECT
toDateTime('2016-06-15 23:00:00') AS time,
toDate(time) AS date_local,
@@ -12,7 +12,7 @@ SELECT
toString(time, 'US/Samoa') AS time_samoa
```
-```
+```text
┌────────────────time─┬─date_local─┬─date_yekat─┬─time_samoa──────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-16 │ 2016-06-15 09:00:00 │
└─────────────────────┴────────────┴────────────┴─────────────────────┘
@@ -201,7 +201,7 @@ For mode values with a meaning of “with 4 or more days this year,” weeks are
For mode values with a meaning of “contains January 1”, the week that contains January 1 is week 1. It doesn't matter how many days of the new year the week contains, even if it contains only one day.
-```
+```sql
toWeek(date, [, mode][, Timezone])
```
**Parameters**
@@ -212,11 +212,11 @@ toWeek(date, [, mode][, Timezone])
**Example**
-``` sql
+```sql
SELECT toDate('2016-12-27') AS date, toWeek(date) AS week0, toWeek(date,1) AS week1, toWeek(date,9) AS week9;
```
-```
+```text
┌───────date─┬─week0─┬─week1─┬─week9─┐
│ 2016-12-27 │ 52 │ 52 │ 1 │
└────────────┴───────┴───────┴───────┘
@@ -231,11 +231,11 @@ The mode argument works exactly like the mode argument to toWeek(). For the sing
**Example**
-``` sql
+```sql
SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(date,1) AS yearWeek1, toYearWeek(date,9) AS yearWeek9;
```
-```
+```text
┌───────date─┬─yearWeek0─┬─yearWeek1─┬─yearWeek9─┐
│ 2016-12-27 │ 201652 │ 201652 │ 201701 │
└────────────┴───────────┴───────────┴───────────┘
@@ -286,7 +286,7 @@ SELECT
addYears(date_time, 1) AS add_years_with_date_time
```
-```
+```text
┌─add_years_with_date─┬─add_years_with_date_time─┐
│ 2019-01-01 │ 2019-01-01 00:00:00 │
└─────────────────────┴──────────────────────────┘
@@ -305,7 +305,7 @@ SELECT
subtractYears(date_time, 1) AS subtract_years_with_date_time
```
-```
+```text
┌─subtract_years_with_date─┬─subtract_years_with_date_time─┐
│ 2018-01-01 │ 2018-01-01 00:00:00 │
└──────────────────────────┴───────────────────────────────┘
diff --git a/docs/en/query_language/functions/ext_dict_functions.md b/docs/en/query_language/functions/ext_dict_functions.md
index 95ddb0eaef6..5af21d6014a 100644
--- a/docs/en/query_language/functions/ext_dict_functions.md
+++ b/docs/en/query_language/functions/ext_dict_functions.md
@@ -6,7 +6,7 @@ For information on connecting and configuring external dictionaries, see [Extern
Retrieves a value from an external dictionary.
-```
+```sql
dictGet('dict_name', 'attr_name', id_expr)
dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```
@@ -95,7 +95,7 @@ LIMIT 3
Checks whether a key is present in a dictionary.
-```
+```sql
dictHas('dict_name', id_expr)
```
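
A minimal usage sketch, assuming a flat dictionary named `ext_dict` is configured (the dictionary name and key are illustrative):

```sql
SELECT dictHas('ext_dict', toUInt64(1)) AS key_exists
```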
@@ -115,7 +115,7 @@ Type: `UInt8`.
For the hierarchical dictionary, returns an array of dictionary keys starting from the passed `id_expr` and continuing along the chain of parent elements.
-```
+```sql
dictGetHierarchy('dict_name', id_expr)
```
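
A hedged sketch, assuming a hierarchical dictionary named `regions` is configured (the name and key are illustrative); the result is the chain of keys from `213` up through its parents:

```sql
SELECT dictGetHierarchy('regions', toUInt64(213)) AS chain
```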
@@ -134,7 +134,7 @@ Type: Array(UInt64).
Checks the ancestor of a key through the whole hierarchical chain in the dictionary.
-```
+```sql
dictIsIn('dict_name', child_id_expr, ancestor_id_expr)
```
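
A minimal sketch, using the same hypothetical hierarchical dictionary `regions`; returns `1` if key `213` lies under ancestor `225`:

```sql
SELECT dictIsIn('regions', toUInt64(213), toUInt64(225)) AS is_descendant
```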
@@ -169,7 +169,7 @@ All these functions have the `OrDefault` modification. For example, `dictGetDate
Syntax:
-```
+```sql
dictGet[Type]('dict_name', 'attr_name', id_expr)
dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```
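
For instance, a hedged sketch using a hypothetical dictionary `ext_dict` with a `String` attribute `name`; the fallback `'unknown'` is returned when key `42` is absent:

```sql
SELECT dictGetStringOrDefault('ext_dict', 'name', toUInt64(42), 'unknown') AS name
```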
diff --git a/docs/en/query_language/functions/functions_for_nulls.md b/docs/en/query_language/functions/functions_for_nulls.md
index 41fec479d0d..a5116096947 100644
--- a/docs/en/query_language/functions/functions_for_nulls.md
+++ b/docs/en/query_language/functions/functions_for_nulls.md
@@ -4,7 +4,7 @@
Checks whether the argument is [NULL](../syntax.md#null).
-```
+```sql
isNull(x)
```
@@ -21,7 +21,7 @@ isNull(x)
Input table
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -30,25 +30,21 @@ Input table
Query
+```sql
+SELECT x FROM t_null WHERE isNull(y)
```
-:) SELECT x FROM t_null WHERE isNull(y)
-
-SELECT x
-FROM t_null
-WHERE isNull(y)
-
+```text
┌─x─┐
│ 1 │
└───┘
-1 rows in set. Elapsed: 0.010 sec.
```
## isNotNull
Checks whether the argument is not [NULL](../syntax.md#null).
-```
+```sql
isNotNull(x)
```
@@ -65,7 +61,7 @@ isNotNull(x)
Input table
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -74,25 +70,21 @@ Input table
Query
+```sql
+SELECT x FROM t_null WHERE isNotNull(y)
```
-:) SELECT x FROM t_null WHERE isNotNull(y)
-
-SELECT x
-FROM t_null
-WHERE isNotNull(y)
-
+```text
┌─x─┐
│ 2 │
└───┘
-1 rows in set. Elapsed: 0.010 sec.
```
## coalesce
Checks from left to right whether `NULL` arguments were passed and returns the first non-`NULL` argument.
-```
+```sql
coalesce(x,...)
```
@@ -109,7 +101,7 @@ coalesce(x,...)
Consider a list of contacts that may specify multiple ways to contact a customer.
-```
+```text
┌─name─────┬─mail─┬─phone─────┬──icq─┐
│ client 1 │ ᴺᵁᴸᴸ │ 123-45-67 │ 123 │
│ client 2 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
@@ -120,25 +112,22 @@ The `mail` and `phone` fields are of type String, but the `icq` field is `UInt32
Get the first available contact method for the customer from the contact list:
+```sql
+SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook
```
-:) SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook
-
-SELECT coalesce(mail, phone, CAST(icq, 'Nullable(String)'))
-FROM aBook
-
+```text
┌─name─────┬─coalesce(mail, phone, CAST(icq, 'Nullable(String)'))─┐
│ client 1 │ 123-45-67 │
│ client 2 │ ᴺᵁᴸᴸ │
└──────────┴──────────────────────────────────────────────────────┘
-2 rows in set. Elapsed: 0.006 sec.
```
## ifNull
Returns an alternative value if the main argument is `NULL`.
-```
+```sql
ifNull(x,alt)
```
@@ -154,17 +143,19 @@ ifNull(x,alt)
**Example**
-```
+```sql
SELECT ifNull('a', 'b')
-
+```
+```text
┌─ifNull('a', 'b')─┐
│ a │
└──────────────────┘
```
-```
+```sql
SELECT ifNull(NULL, 'b')
-
+```
+```text
┌─ifNull(NULL, 'b')─┐
│ b │
└───────────────────┘
@@ -174,7 +165,7 @@ SELECT ifNull(NULL, 'b')
Returns `NULL` if the arguments are equal.
-```
+```sql
nullIf(x, y)
```
@@ -189,17 +180,19 @@ nullIf(x, y)
**Example**
-```
+```sql
SELECT nullIf(1, 1)
-
+```
+```text
┌─nullIf(1, 1)─┐
│ ᴺᵁᴸᴸ │
└──────────────┘
```
-```
+```sql
SELECT nullIf(1, 2)
-
+```
+```text
┌─nullIf(1, 2)─┐
│ 1 │
└──────────────┘
@@ -209,7 +202,7 @@ SELECT nullIf(1, 2)
Converts a value of [Nullable](../../data_types/nullable.md) type to the corresponding non-`Nullable` value, assuming the value is not `NULL`.
-```
+```sql
assumeNotNull(x)
```
@@ -226,15 +219,16 @@ assumeNotNull(x)
Consider the `t_null` table.
-```
+```sql
SHOW CREATE TABLE t_null
-
+```
+```text
┌─statement─────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_null ( x Int8, y Nullable(Int8)) ENGINE = TinyLog │
└───────────────────────────────────────────────────────────────────────────┘
```
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -243,18 +237,20 @@ SHOW CREATE TABLE t_null
Apply the `assumeNotNull` function to the `y` column.
-```
+```sql
SELECT assumeNotNull(y) FROM t_null
-
+```
+```text
┌─assumeNotNull(y)─┐
│ 0 │
│ 3 │
└──────────────────┘
```
-```
+```sql
SELECT toTypeName(assumeNotNull(y)) FROM t_null
-
+```
+```text
┌─toTypeName(assumeNotNull(y))─┐
│ Int8 │
│ Int8 │
@@ -265,7 +261,7 @@ SELECT toTypeName(assumeNotNull(y)) FROM t_null
Converts the argument type to `Nullable`.
-```
+```sql
toNullable(x)
```
@@ -279,15 +275,18 @@ toNullable(x)
**Example**
-```
+```sql
SELECT toTypeName(10)
-
+```
+```text
┌─toTypeName(10)─┐
│ UInt8 │
└────────────────┘
-
+```
+```sql
SELECT toTypeName(toNullable(10))
-
+```
+```text
┌─toTypeName(toNullable(10))─┐
│ Nullable(UInt8) │
└────────────────────────────┘
diff --git a/docs/en/query_language/functions/geo.md b/docs/en/query_language/functions/geo.md
index 79b8390d59f..49cdcecc9f9 100644
--- a/docs/en/query_language/functions/geo.md
+++ b/docs/en/query_language/functions/geo.md
@@ -4,7 +4,7 @@
Calculates the distance between two points on the Earth's surface using [the great-circle formula](https://en.wikipedia.org/wiki/Great-circle_distance).
-```
+```sql
greatCircleDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
```
@@ -25,11 +25,11 @@ Generates an exception when the input parameter values fall outside of the range
**Example**
-``` sql
+```sql
SELECT greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)
```
-```
+```text
┌─greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)─┐
│ 14132374.194975413 │
└───────────────────────────────────────────────────────────────────┘
@@ -40,7 +40,7 @@ SELECT greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)
Checks whether the point belongs to at least one of the ellipses.
Coordinates are geometric in the Cartesian coordinate system.
-```
+```sql
pointInEllipses(x, y, x₀, y₀, a₀, b₀,...,xₙ, yₙ, aₙ, bₙ)
```
@@ -58,11 +58,11 @@ The input parameters must be `2+4⋅n`, where `n` is the number of ellipses.
**Example**
-``` sql
+```sql
SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)
```
-```
+```text
┌─pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)─┐
│ 1 │
└─────────────────────────────────────────────────┘
@@ -72,7 +72,7 @@ SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)
Checks whether the point belongs to the polygon on the plane.
-```
+```sql
pointInPolygon((x, y), [(a, b), (c, d) ...], ...)
```
@@ -89,11 +89,11 @@ If the point is on the polygon boundary, the function may return either 0 or 1.
**Example**
-``` sql
+```sql
SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -102,7 +102,7 @@ SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res
## geohashEncode
Encodes latitude and longitude as a geohash string; see (http://geohash.org/, https://en.wikipedia.org/wiki/Geohash).
-```
+```sql
geohashEncode(longitude, latitude, [precision])
```
@@ -118,11 +118,11 @@ geohashEncode(longitude, latitude, [precision])
**Example**
-``` sql
+```sql
SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res
```
-```
+```text
┌─res──────────┐
│ ezs42d000000 │
└──────────────┘
@@ -142,11 +142,11 @@ Decodes any geohash-encoded string into longitude and latitude.
**Example**
-``` sql
+```sql
SELECT geohashDecode('ezs42') AS res
```
-```
+```text
┌─res─────────────────────────────┐
│ (-5.60302734375,42.60498046875) │
└─────────────────────────────────┘
@@ -156,7 +156,7 @@ SELECT geohashDecode('ezs42') AS res
Calculates the [H3](https://uber.github.io/h3/#/documentation/overview/introduction) point index for `(lon, lat)` at the specified resolution.
-```
+```sql
geoToH3(lon, lat, resolution)
```
@@ -175,10 +175,10 @@ Type: [UInt64](../../data_types/int_uint.md).
**Example**
-``` sql
+```sql
SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index
```
-```
+```text
┌────────────h3Index─┐
│ 644325524701193974 │
└────────────────────┘
@@ -207,10 +207,10 @@ Please note that function will throw an exception if resulting array is over 10'
**Example**
-```
+```sql
SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos
```
-```
+```text
┌─thasos──────────────────────────────────────┐
│ ['sx1q','sx1r','sx32','sx1w','sx1x','sx38'] │
└─────────────────────────────────────────────┘
diff --git a/docs/en/query_language/functions/hash_functions.md b/docs/en/query_language/functions/hash_functions.md
index b384dead609..fde41d97da4 100644
--- a/docs/en/query_language/functions/hash_functions.md
+++ b/docs/en/query_language/functions/hash_functions.md
@@ -6,7 +6,7 @@ Hash functions can be used for the deterministic pseudo-random shuffling of elem
[Interprets](../../query_language/functions/type_conversion_functions.md#type_conversion_functions-reinterpretAsString) all the input parameters as strings and calculates the [MD5](https://en.wikipedia.org/wiki/MD5) hash value for each of them. Then combines hashes, takes the first 8 bytes of the hash of the resulting string, and interprets them as `UInt64` in big-endian byte order.
-```
+```sql
halfMD5(par1, ...)
```
@@ -42,7 +42,7 @@ If you want to get the same result as output by the md5sum utility, use lower(he
Produces a 64-bit [SipHash](https://131002.net/siphash/) hash value.
-```
+```sql
sipHash64(par1,...)
```
@@ -68,7 +68,7 @@ A [UInt64](../../data_types/int_uint.md) data type hash value.
```sql
SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type
```
-```
+```text
┌──────────────SipHash─┬─type───┐
│ 13726873534472839665 │ UInt64 │
└──────────────────────┴────────┘
@@ -84,7 +84,7 @@ Differs from sipHash64 in that the final xor-folding state is only done up to 12
Produces a 64-bit [CityHash](https://github.com/google/cityhash) hash value.
-```
+```sql
cityHash64(par1,...)
```
@@ -150,7 +150,7 @@ Levels are the same as in URLHierarchy. This function is specific to Yandex.Metr
Produces a 64-bit [FarmHash](https://github.com/google/farmhash) hash value.
-```
+```sql
farmHash64(par1, ...)
```
@@ -177,21 +177,71 @@ SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:0
## javaHash {#hash_functions-javahash}
-Calculates [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452)
- from a string.
-Accepts a String-type argument. Returns Int32.
+Calculates [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) from a string. This hash function is neither fast nor high-quality. The only reason to use it is when the algorithm is already used in another system and you have to calculate exactly the same result.
-## hiveHash
+```sql
+SELECT javaHash('');
+```
-Calculates HiveHash from a string.
-Accepts a String-type argument. Returns Int32.
-This is just [JavaHash](#hash_functions-javahash) with zeroed out sign bit. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0.
+**Returned value**
+
+An `Int32` data type hash value.
+
+Type: `Int32`.
+
+**Example**
+
+Query:
+
+```sql
+SELECT javaHash('Hello, world!');
+```
+
+Result:
+
+```text
+┌─javaHash('Hello, world!')─┐
+│ -1880044555 │
+└───────────────────────────┘
+```
+
+## hiveHash {#hash_functions-hivehash}
+
+Calculates `HiveHash` from a string.
+
+```sql
+SELECT hiveHash('');
+```
+
+This is just [JavaHash](#hash_functions-javahash) with the sign bit zeroed out. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0. This hash function is neither fast nor high-quality. The only reason to use it is when the algorithm is already used in another system and you have to calculate exactly the same result.
+
+**Returned value**
+
+An `Int32` data type hash value.
+
+Type: `Int32`.
+
+**Example**
+
+Query:
+
+```sql
+SELECT hiveHash('Hello, world!');
+```
+
+Result:
+
+```text
+┌─hiveHash('Hello, world!')─┐
+│ 267439093 │
+└───────────────────────────┘
+```
## metroHash64
Produces a 64-bit [MetroHash](http://www.jandrewrogers.com/2015/05/27/metrohash/) hash value.
-```
+```sql
metroHash64(par1, ...)
```
@@ -224,7 +274,7 @@ For more information, see the link: [JumpConsistentHash](https://arxiv.org/pdf/1
Produces a [MurmurHash2](https://github.com/aappleby/smhasher) hash value.
-```
+```sql
murmurHash2_32(par1, ...)
murmurHash2_64(par1, ...)
```
@@ -253,7 +303,7 @@ SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:
Produces a [MurmurHash3](https://github.com/aappleby/smhasher) hash value.
-```
+```sql
murmurHash3_32(par1, ...)
murmurHash3_64(par1, ...)
```
@@ -282,7 +332,7 @@ SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:
Produces a 128-bit [MurmurHash3](https://github.com/aappleby/smhasher) hash value.
-```
+```sql
murmurHash3_128( expr )
```
@@ -305,10 +355,42 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3)
└──────────────────┴─────────────────┘
```
-## xxHash32, xxHash64
+## xxHash32, xxHash64 {#hash_functions-xxhash32}
-Calculates xxHash from a string.
-Accepts a String-type argument. Returns UInt64 Or UInt32.
-For more information, see the link: [xxHash](http://cyan4973.github.io/xxHash/)
+Calculates `xxHash` from a string. It comes in two flavors: 32-bit and 64-bit.
+
+```sql
+SELECT xxHash32('');
+-- or
+SELECT xxHash64('');
+```
+
+**Returned value**
+
+A `UInt32` or `UInt64` data type hash value.
+
+Type: `UInt32` for `xxHash32`, `UInt64` for `xxHash64`.
+
+**Example**
+
+Query:
+
+```sql
+SELECT xxHash32('Hello, world!');
+```
+
+Result:
+
+```text
+┌─xxHash32('Hello, world!')─┐
+│ 834093149 │
+└───────────────────────────┘
+```
+
+**See Also**
+
+- [xxHash](http://cyan4973.github.io/xxHash/).
[Original article](https://clickhouse.yandex/docs/en/query_language/functions/hash_functions/)
diff --git a/docs/en/query_language/functions/higher_order_functions.md b/docs/en/query_language/functions/higher_order_functions.md
index 5f401219a33..c114f912980 100644
--- a/docs/en/query_language/functions/higher_order_functions.md
+++ b/docs/en/query_language/functions/higher_order_functions.md
@@ -25,18 +25,20 @@ Returns an array obtained from the original application of the `func` function t
Examples:
-``` sql
+```sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
-
+```
+```text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```
The following example shows how to create a tuple of elements from different arrays:
-``` sql
+```sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
-
+```
+```text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
@@ -50,17 +52,17 @@ Returns an array containing only the elements in `arr1` for which `func` returns
Examples:
-``` sql
+```sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```
-```
+```text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```
-``` sql
+```sql
SELECT
arrayFilter(
(i, x) -> x LIKE '%World%',
@@ -69,7 +71,7 @@ SELECT
AS res
```
-```
+```text
┌─res─┐
│ [2] │
└─────┘
@@ -111,11 +113,11 @@ Returns an array of partial sums of elements in the source array (a running sum)
Example:
-``` sql
+```sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```
-```
+```text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
@@ -125,11 +127,11 @@ SELECT arrayCumSum([1, 1, 1, 1]) AS res
Same as `arrayCumSum`: returns an array of partial sums of the elements in the source array (a running sum). Unlike `arrayCumSum`, whenever the running sum falls below zero, it is replaced with zero and the subsequent calculation continues from zero. For example:
-``` sql
+```sql
SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
```
-```
+```text
┌─res───────┐
│ [1,2,0,1] │
└───────────┘
@@ -143,11 +145,11 @@ The [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform)
Example:
-``` sql
+```sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]);
```
-```
+```text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
@@ -161,10 +163,10 @@ Returns an array as result of sorting the elements of `arr1` in descending order
Example:
-``` sql
+```sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```
-``` sql
+```text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
diff --git a/docs/en/query_language/functions/ip_address_functions.md b/docs/en/query_language/functions/ip_address_functions.md
index ce58a187853..411b882d6b2 100644
--- a/docs/en/query_language/functions/ip_address_functions.md
+++ b/docs/en/query_language/functions/ip_address_functions.md
@@ -14,7 +14,7 @@ Similar to IPv4NumToString, but using xxx instead of the last octet.
Example:
-``` sql
+```sql
SELECT
IPv4NumToStringClassC(ClientIP) AS k,
count() AS c
@@ -24,7 +24,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─k──────────────┬─────c─┐
│ 83.149.9.xxx │ 26238 │
│ 217.118.81.xxx │ 26074 │
@@ -46,17 +46,17 @@ Since using 'xxx' is highly unusual, this may be changed in the future. We recom
Accepts a FixedString(16) value containing the IPv6 address in binary format. Returns a string containing this address in text format.
IPv6-mapped IPv4 addresses are output in the format ::ffff:111.222.33.44. Examples:
-``` sql
+```sql
SELECT IPv6NumToString(toFixedString(unhex('2A0206B8000000000000000000000011'), 16)) AS addr
```
-```
+```text
┌─addr─────────┐
│ 2a02:6b8::11 │
└──────────────┘
```
-``` sql
+```sql
SELECT
IPv6NumToString(ClientIP6 AS k),
count() AS c
@@ -67,7 +67,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─IPv6NumToString(ClientIP6)──────────────┬─────c─┐
│ 2a02:2168:aaa:bbbb::2 │ 24695 │
│ 2a02:2698:abcd:abcd:abcd:abcd:8888:5555 │ 22408 │
@@ -82,7 +82,7 @@ LIMIT 10
└─────────────────────────────────────────┴───────┘
```
-``` sql
+```sql
SELECT
IPv6NumToString(ClientIP6 AS k),
count() AS c
@@ -93,7 +93,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─IPv6NumToString(ClientIP6)─┬──────c─┐
│ ::ffff:94.26.111.111 │ 747440 │
│ ::ffff:37.143.222.4 │ 529483 │
@@ -117,11 +117,11 @@ HEX can be uppercase or lowercase.
Takes a `UInt32` number. Interprets it as an IPv4 address in [big endian](https://en.wikipedia.org/wiki/Endianness). Returns a `FixedString(16)` value containing the IPv6 address in binary format. Examples:
-``` sql
+```sql
SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr
```
-```
+```text
┌─addr───────────────┐
│ ::ffff:192.168.0.1 │
└────────────────────┘
@@ -131,7 +131,7 @@ SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr
Accepts a FixedString(16) value containing an IPv6 address in binary format. Returns a string containing the address with the specified number of bits removed, in text format. For example:
-``` sql
+```sql
WITH
IPv6StringToNum('2001:0DB8:AC10:FE01:FEED:BABE:CAFE:F00D') AS ipv6,
IPv4ToIPv6(IPv4StringToNum('192.168.0.1')) AS ipv4
@@ -141,7 +141,7 @@ SELECT
```
-```
+```text
┌─cutIPv6(ipv6, 2, 0)─────────────────┬─cutIPv6(ipv4, 0, 2)─┐
│ 2001:db8:ac10:fe01:feed:babe:cafe:0 │ ::ffff:192.168.0.0 │
└─────────────────────────────────────┴─────────────────────┘
@@ -155,7 +155,7 @@ Accepts an IPv4 and an UInt8 value containing the [CIDR](https://en.wikipedia.or
```sql
SELECT IPv4CIDRToRange(toIPv4('192.168.5.2'), 16)
```
-```
+```text
┌─IPv4CIDRToRange(toIPv4('192.168.5.2'), 16)─┐
│ ('192.168.0.0','192.168.255.255') │
└────────────────────────────────────────────┘
@@ -171,7 +171,7 @@ Accepts an IPv6 and an UInt8 value containing the CIDR. Return a tuple with two
SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32);
```
-```
+```text
┌─IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32)─┐
│ ('2001:db8::','2001:db8:ffff:ffff:ffff:ffff:ffff:ffff') │
└────────────────────────────────────────────────────────────────────────┘
@@ -181,7 +181,7 @@ SELECT IPv6CIDRToRange(toIPv6('2001:0db8:0000:85a3:0000:0000:ac1f:8001'), 32);
An alias to `IPv4StringToNum()` that takes the string form of an IPv4 address and returns a value of [IPv4](../../data_types/domains/ipv4.md) type, which is binary-equal to the value returned by `IPv4StringToNum()`.
-``` sql
+```sql
WITH
'171.225.130.45' as IPv4_string
SELECT
@@ -189,13 +189,13 @@ SELECT
toTypeName(toIPv4(IPv4_string))
```
-```
+```text
┌─toTypeName(IPv4StringToNum(IPv4_string))─┬─toTypeName(toIPv4(IPv4_string))─┐
│ UInt32 │ IPv4 │
└──────────────────────────────────────────┴─────────────────────────────────┘
```
-``` sql
+```sql
WITH
'171.225.130.45' as IPv4_string
SELECT
@@ -203,7 +203,7 @@ SELECT
hex(toIPv4(IPv4_string))
```
-```
+```text
┌─hex(IPv4StringToNum(IPv4_string))─┬─hex(toIPv4(IPv4_string))─┐
│ ABE1822D │ ABE1822D │
└───────────────────────────────────┴──────────────────────────┘
@@ -213,7 +213,7 @@ SELECT
An alias to `IPv6StringToNum()` that takes the string form of an IPv6 address and returns a value of [IPv6](../../data_types/domains/ipv6.md) type, which is binary-equal to the value returned by `IPv6StringToNum()`.
-``` sql
+```sql
WITH
'2001:438:ffff::407d:1bc1' as IPv6_string
SELECT
@@ -221,13 +221,13 @@ SELECT
toTypeName(toIPv6(IPv6_string))
```
-```
+```text
┌─toTypeName(IPv6StringToNum(IPv6_string))─┬─toTypeName(toIPv6(IPv6_string))─┐
│ FixedString(16) │ IPv6 │
└──────────────────────────────────────────┴─────────────────────────────────┘
```
-``` sql
+```sql
WITH
'2001:438:ffff::407d:1bc1' as IPv6_string
SELECT
@@ -235,7 +235,7 @@ SELECT
hex(toIPv6(IPv6_string))
```
-```
+```text
┌─hex(IPv6StringToNum(IPv6_string))─┬─hex(toIPv6(IPv6_string))─────────┐
│ 20010438FFFF000000000000407D1BC1 │ 20010438FFFF000000000000407D1BC1 │
└───────────────────────────────────┴──────────────────────────────────┘
diff --git a/docs/en/query_language/functions/json_functions.md b/docs/en/query_language/functions/json_functions.md
index 21e346e9ef9..4de06ae53fb 100644
--- a/docs/en/query_language/functions/json_functions.md
+++ b/docs/en/query_language/functions/json_functions.md
@@ -35,7 +35,7 @@ Returns the value of a field, including separators.
Examples:
-```
+```sql
visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'
visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'
```
@@ -46,7 +46,7 @@ Parses the string in double quotes. The value is unescaped. If unescaping failed
Examples:
-```
+```sql
visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''
@@ -65,9 +65,9 @@ If the value does not exist, `0` will be returned.
Examples:
-```
-select JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
-select JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
+```sql
+SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
+SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
```
`indices_or_keys` is a list of zero or more arguments, each of which can be either a string or an integer.
@@ -82,12 +82,12 @@ You may use integers to access both JSON arrays and JSON objects.
So, for example:
-```
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 2) = 'b'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -1) = 'b'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
-select JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
+```sql
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 2) = 'b'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -1) = 'b'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
+SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```
## JSONLength(json[, indices_or_keys]...)
@@ -98,9 +98,9 @@ If the value does not exist or has a wrong type, `0` will be returned.
Examples:
-```
-select JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
-select JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
+```sql
+SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
+SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```
## JSONType(json[, indices_or_keys]...)
@@ -111,10 +111,10 @@ If the value does not exist, `Null` will be returned.
Examples:
-```
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
+```sql
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```
## JSONExtractUInt(json[, indices_or_keys]...)
@@ -128,10 +128,10 @@ If the value does not exist or has a wrong type, `0` will be returned.
Examples:
-```
-select JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
-select JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
-select JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
+```sql
+SELECT JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
+SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
+SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
```
## JSONExtractString(json[, indices_or_keys]...)
@@ -144,12 +144,12 @@ The value is unescaped. If unescaping failed, it returns an empty string.
Examples:
-```
-select JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
-select JSONExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
-select JSONExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
-select JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
-select JSONExtractString('{"abc":"hello}', 'abc') = ''
+```sql
+SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
+SELECT JSONExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
+SELECT JSONExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
+SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
+SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```
## JSONExtract(json[, indices_or_keys...], return_type)
@@ -163,7 +163,7 @@ This means
Examples:
-```
+```sql
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(String, Array(Float64))') = ('hello',[-100,200,300])
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(b Array(Float64), a String)') = ([-100,200,300],'hello')
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 'Array(Nullable(Int8))') = [-100, NULL, NULL]
@@ -179,7 +179,7 @@ Parse key-value pairs from a JSON where the values are of the given ClickHouse d
Example:
-```
+```sql
SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)];
```
@@ -191,8 +191,8 @@ If the part does not exist or has a wrong type, an empty string will be returned
Example:
-```
-select JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
+```sql
+SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
```
[Original article](https://clickhouse.yandex/docs/en/query_language/functions/json_functions/)
diff --git a/docs/en/query_language/functions/math_functions.md b/docs/en/query_language/functions/math_functions.md
index 31deb337fdb..d2673ddda41 100644
--- a/docs/en/query_language/functions/math_functions.md
+++ b/docs/en/query_language/functions/math_functions.md
@@ -48,11 +48,11 @@ If 'x' is non-negative, then erf(x / σ√2) is the probability that a random
Example (three sigma rule):
-``` sql
+```sql
SELECT erf(3 / sqrt(2))
```
-```
+```text
┌─erf(divide(3, sqrt(2)))─┐
│ 0.9973002039367398 │
└─────────────────────────┘
diff --git a/docs/en/query_language/functions/other_functions.md b/docs/en/query_language/functions/other_functions.md
index 131f961b22a..b1471b7dfe4 100644
--- a/docs/en/query_language/functions/other_functions.md
+++ b/docs/en/query_language/functions/other_functions.md
@@ -8,7 +8,7 @@ Returns a string with the name of the host that this function was performed on.
Extracts the trailing part of a string after the last slash or backslash. This function is often used to extract the filename from a path.
-```
+```sql
basename( expr )
```
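
A minimal sketch; the path is illustrative:

```sql
SELECT basename('/var/lib/clickhouse/data/file.txt') AS name
```
```text
┌─name─────┐
│ file.txt │
└──────────┘
```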
@@ -60,9 +60,10 @@ This function is used by the system for implementing Pretty formats.
`NULL` is represented as a string corresponding to `NULL` in `Pretty` formats.
-```
+```sql
SELECT visibleWidth(NULL)
-
+```
+```text
┌─visibleWidth(NULL)─┐
│ 4 │
└────────────────────┘
@@ -139,7 +140,7 @@ The band is drawn with accuracy to one eighth of a symbol.
Example:
-``` sql
+```sql
SELECT
toHour(EventTime) AS h,
count() AS c,
@@ -149,7 +150,7 @@ GROUP BY h
ORDER BY h ASC
```
-```
+```text
┌──h─┬──────c─┬─bar────────────────┐
│ 0 │ 292907 │ █████████▋ │
│ 1 │ 180563 │ ██████ │
@@ -208,7 +209,7 @@ If the 'x' value is equal to one of the elements in the 'array_from' array, it r
Example:
-``` sql
+```sql
SELECT
transform(SearchEngineID, [2, 3], ['Yandex', 'Google'], 'Other') AS title,
count() AS c
@@ -218,7 +219,7 @@ GROUP BY title
ORDER BY c DESC
```
-```
+```text
┌─title─────┬──────c─┐
│ Yandex │ 498635 │
│ Google │ 229872 │
@@ -237,7 +238,7 @@ Types:
Example:
-``` sql
+```sql
SELECT
transform(domain(Referer), ['yandex.ru', 'google.ru', 'vk.com'], ['www.yandex', 'example.com']) AS s,
count() AS c
@@ -247,7 +248,7 @@ ORDER BY count() DESC
LIMIT 10
```
-```
+```text
┌─s──────────────┬───────c─┐
│ │ 2906259 │
│ www.yandex │ 867767 │
@@ -267,13 +268,13 @@ Accepts the size (number of bytes). Returns a rounded size with a suffix (KiB, M
Example:
-``` sql
+```sql
SELECT
arrayJoin([1, 1024, 1024*1024, 192851925]) AS filesize_bytes,
formatReadableSize(filesize_bytes) AS filesize
```
-```
+```text
┌─filesize_bytes─┬─filesize───┐
│ 1 │ 1.00 B │
│ 1024 │ 1.00 KiB │
@@ -325,7 +326,7 @@ If you make a subquery with ORDER BY and call the function from outside the subq
If the `offset` value is outside the block bounds, a default value for `column` is returned. If `default_value` is given, it is used instead.
This function can be used to compute year-over-year metric values:
-``` sql
+```sql
WITH toDate('2018-01-01') AS start_date
SELECT
toStartOfMonth(start_date + (number * 32)) AS month,
@@ -335,7 +336,7 @@ SELECT
FROM numbers(16)
```
-```
+```text
┌──────month─┬─money─┬─prev_year─┬─year_over_year─┐
│ 2018-01-01 │ 32 │ 0 │ 0 │
│ 2018-02-01 │ 63 │ 0 │ 0 │
@@ -367,7 +368,7 @@ If you make a subquery with ORDER BY and call the function from outside the subq
Example:
-``` sql
+```sql
SELECT
EventID,
EventTime,
@@ -384,7 +385,7 @@ FROM
)
```
-```
+```text
┌─EventID─┬───────────EventTime─┬─delta─┐
│ 1106 │ 2016-11-24 00:00:04 │ 0 │
│ 1107 │ 2016-11-24 00:00:05 │ 1 │
@@ -396,19 +397,22 @@ FROM
Please note - block size affects the result. With each new block, the `runningDifference` state is reset.
-``` sql
+```sql
SELECT
number,
runningDifference(number + 1) AS diff
FROM numbers(100000)
WHERE diff != 1
+```
+```text
┌─number─┬─diff─┐
│ 0 │ 0 │
└────────┴──────┘
┌─number─┬─diff─┐
│ 65536 │ 0 │
└────────┴──────┘
-
+```
+```sql
SET max_block_size = 100000 -- default value is 65536!
SELECT
@@ -416,6 +420,8 @@ SELECT
runningDifference(number + 1) AS diff
FROM numbers(100000)
WHERE diff != 1
+```
+```text
┌─number─┬─diff─┐
│ 0 │ 0 │
└────────┴──────┘
@@ -441,7 +447,7 @@ Accepts a MAC address in the format AA:BB:CC:DD:EE:FF (colon-separated numbers i
Returns the number of fields in [Enum](../../data_types/enum.md).
-```
+```sql
getSizeOfEnumType(value)
```
@@ -456,9 +462,10 @@ getSizeOfEnumType(value)
**Example**
-```
+```sql
SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x
-
+```
+```text
┌─x─┐
│ 2 │
└───┘
@@ -468,7 +475,7 @@ SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x
Returns the name of the class that represents the data type of the column in RAM.
-```
+```sql
toColumnTypeName(value)
```
@@ -482,21 +489,18 @@ toColumnTypeName(value)
**Example of the difference between `toTypeName` and `toColumnTypeName`**
+```sql
+SELECT toTypeName(CAST('2018-01-01 01:02:03' AS DateTime))
```
-:) select toTypeName(cast('2018-01-01 01:02:03' AS DateTime))
-
-SELECT toTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))
-
+```text
┌─toTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ DateTime │
└─────────────────────────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.008 sec.
-
-:) select toColumnTypeName(cast('2018-01-01 01:02:03' AS DateTime))
-
-SELECT toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))
-
+```
+```sql
+SELECT toColumnTypeName(CAST('2018-01-01 01:02:03' AS DateTime))
+```
+```text
┌─toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ Const(UInt32) │
└───────────────────────────────────────────────────────────┘
@@ -508,7 +512,7 @@ The example shows that the `DateTime` data type is stored in memory as `Const(UI
Outputs a detailed description of data structures in RAM.
-```
+```sql
dumpColumnStructure(value)
```
@@ -522,9 +526,10 @@ dumpColumnStructure(value)
**Example**
-```
+```sql
SELECT dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))
-
+```
+```text
┌─dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ DateTime, Const(size = 1, UInt32(size = 1)) │
└──────────────────────────────────────────────────────────────┘
@@ -536,7 +541,7 @@ Outputs the default value for the data type.
Does not include default values for custom columns set by the user.
-```
+```sql
defaultValueOfArgumentType(expression)
```
@@ -552,26 +557,21 @@ defaultValueOfArgumentType(expression)
**Example**
+```sql
+SELECT defaultValueOfArgumentType( CAST(1 AS Int8) )
```
-:) SELECT defaultValueOfArgumentType( CAST(1 AS Int8) )
-
-SELECT defaultValueOfArgumentType(CAST(1, 'Int8'))
-
+```text
┌─defaultValueOfArgumentType(CAST(1, 'Int8'))─┐
│ 0 │
└─────────────────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
-
-:) SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) )
-
-SELECT defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))
-
+```
+```sql
+SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) )
+```
+```text
┌─defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))─┐
│ ᴺᵁᴸᴸ │
└───────────────────────────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
## indexHint
@@ -588,9 +588,10 @@ The expression passed to the function is not calculated, but ClickHouse applies
Here is a table with the test data for [ontime](../../getting_started/example_datasets/ontime.md).
-```
+```sql
SELECT count() FROM ontime
-
+```
+```text
┌─count()─┐
│ 4276457 │
└─────────┘
@@ -600,15 +601,10 @@ The table has indexes for the fields `(FlightDate, (Year, FlightDate))`.
Create a selection by date like this:
+```sql
+SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
```
-:) SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
-
-SELECT
- FlightDate AS k,
- count()
-FROM ontime
-GROUP BY k
-ORDER BY k ASC
+```text
┌──────────k─┬─count()─┐
│ 2017-01-01 │ 13970 │
@@ -618,37 +614,24 @@ ORDER BY k ASC
│ 2017-09-29 │ 16384 │
│ 2017-09-30 │ 12520 │
└────────────┴─────────┘
-
-273 rows in set. Elapsed: 0.072 sec. Processed 4.28 million rows, 8.55 MB (59.00 million rows/s., 118.01 MB/s.)
```
In this selection, the index is not used and ClickHouse processed the entire table (`Processed 4.28 million rows`). To apply the index, select a specific date and run the following query:
+```sql
+SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
```
-:) SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
-
-SELECT
- FlightDate AS k,
- count()
-FROM ontime
-WHERE k = '2017-09-15'
-GROUP BY k
-ORDER BY k ASC
-
+```text
┌──────────k─┬─count()─┐
│ 2017-09-15 │ 16428 │
└────────────┴─────────┘
-
-1 rows in set. Elapsed: 0.014 sec. Processed 32.74 thousand rows, 65.49 KB (2.31 million rows/s., 4.63 MB/s.)
```
The last line of output shows that by using the index, ClickHouse processed a significantly smaller number of rows (`Processed 32.74 thousand rows`).
Now pass the expression `k = '2017-09-15'` to the `indexHint` function:
-```
-:) SELECT FlightDate AS k, count() FROM ontime WHERE indexHint(k = '2017-09-15') GROUP BY k ORDER BY k
-
+```sql
SELECT
FlightDate AS k,
count()
@@ -656,15 +639,14 @@ FROM ontime
WHERE indexHint(k = '2017-09-15')
GROUP BY k
ORDER BY k ASC
-
+```
+```text
┌──────────k─┬─count()─┐
│ 2017-09-14 │ 7071 │
│ 2017-09-15 │ 16428 │
│ 2017-09-16 │ 1077 │
│ 2017-09-30 │ 8167 │
└────────────┴─────────┘
-
-4 rows in set. Elapsed: 0.004 sec. Processed 32.74 thousand rows, 65.49 KB (8.97 million rows/s., 17.94 MB/s.)
```
The response to the request shows that ClickHouse applied the index in the same way as the previous time (`Processed 32.74 thousand rows`). However, the resulting set of rows shows that the expression `k = '2017-09-15'` was not used when generating the result.
@@ -677,7 +659,7 @@ Creates an array with a single value.
Used for internal implementation of [arrayJoin](array_join.md#functions_arrayjoin).
-```
+```sql
replicate(x, arr)
```
@@ -692,9 +674,10 @@ replicate(x, arr)
**Example**
-```
+```sql
SELECT replicate(1, ['a', 'b', 'c'])
-
+```
+```text
┌─replicate(1, ['a', 'b', 'c'])─┐
│ [1,1,1] │
└───────────────────────────────┘
@@ -704,7 +687,7 @@ SELECT replicate(1, ['a', 'b', 'c'])
Returns the amount of remaining space in the filesystem where the database files are located. See the [path](../../operations/server_settings/settings.md#server_settings-path) server setting description.
-```
+```sql
filesystemAvailable()
```
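
A minimal sketch; combining it with `formatReadableSize` makes the result human-readable:

```sql
SELECT formatReadableSize(filesystemAvailable()) AS free_space
```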
@@ -756,7 +739,8 @@ custom_message - is an optional parameter: a constant string, provides an error
```sql
SELECT throwIf(number = 3, 'Too many') FROM numbers(10);
-
+```
+```text
↙ Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.) Received exception from server (version 19.14.1):
Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Too many.
```
@@ -767,7 +751,8 @@ Returns the same value that was used as its argument.
```sql
SELECT identity(42)
-
+```
+```text
┌─identity(42)─┐
│ 42 │
└──────────────┘
diff --git a/docs/en/query_language/functions/rounding_functions.md b/docs/en/query_language/functions/rounding_functions.md
index 2a52db6c865..ea2d899b13c 100644
--- a/docs/en/query_language/functions/rounding_functions.md
+++ b/docs/en/query_language/functions/rounding_functions.md
@@ -22,7 +22,7 @@ Rounds a value to a specified number of decimal places.
The function returns the nearest number of the specified order. When the given number is equidistant from the surrounding numbers, the function returns the one ending in an even digit (banker's rounding).
-```
+```sql
round(expression [, decimal_places])
```
@@ -42,10 +42,10 @@ The rounded number of the same type as the input number.
**Example of use**
-``` sql
+```sql
SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
```
-```
+```text
┌───x─┬─round(divide(number, 2))─┐
│ 0 │ 0 │
│ 0.5 │ 0 │
@@ -57,7 +57,7 @@ SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
Rounding to the nearest number.
-```
+```text
round(3.2, 0) = 3
round(4.1267, 2) = 4.13
round(22,-1) = 20
@@ -67,7 +67,7 @@ round(-467,-2) = -500
Banker's rounding.
-```
+```text
round(3.5) = 4
round(4.5) = 4
round(3.55, 1) = 3.6
diff --git a/docs/en/query_language/functions/splitting_merging_functions.md b/docs/en/query_language/functions/splitting_merging_functions.md
index 0e1cf98ee20..8c4e1ff8b45 100644
--- a/docs/en/query_language/functions/splitting_merging_functions.md
+++ b/docs/en/query_language/functions/splitting_merging_functions.md
@@ -20,9 +20,10 @@ Selects substrings of consecutive bytes from the ranges a-z and A-Z.Returns an a
**Example:**
-```
+```sql
SELECT alphaTokens('abca1abc')
-
+```
+```text
┌─alphaTokens('abca1abc')─┐
│ ['abca','abc'] │
└─────────────────────────┘
diff --git a/docs/en/query_language/functions/string_functions.md b/docs/en/query_language/functions/string_functions.md
index 1eca9c0e815..0f60749d307 100644
--- a/docs/en/query_language/functions/string_functions.md
+++ b/docs/en/query_language/functions/string_functions.md
@@ -64,7 +64,7 @@ Returns 1, if the set of bytes is valid UTF-8 encoded, otherwise 0.
Replaces invalid UTF-8 characters with the `�` (U+FFFD) character. All consecutive invalid characters are collapsed into a single replacement character.
-```
+```sql
toValidUTF8( input_string )
```
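
A minimal sketch; `\xF0\x80\x80\x80` is an intentionally invalid UTF-8 byte sequence:

```sql
SELECT toValidUTF8('a\xF0\x80\x80\x80b') AS res
```
```text
┌─res─┐
│ a�b │
└─────┘
```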
@@ -100,13 +100,16 @@ Formatting constant pattern with the string listed in the arguments. `pattern` i
```sql
SELECT format('{1} {0} {1}', 'World', 'Hello')
-
+```
+```text
┌─format('{1} {0} {1}', 'World', 'Hello')─┐
│ Hello World Hello │
└─────────────────────────────────────────┘
-
+```
+```sql
SELECT format('{} {}', 'Hello', 'World')
-
+```
+```text
┌─format('{} {}', 'Hello', 'World')─┐
│ Hello World │
└───────────────────────────────────┘
diff --git a/docs/en/query_language/functions/string_replace_functions.md b/docs/en/query_language/functions/string_replace_functions.md
index 19339dd474d..22c0a27bd41 100644
--- a/docs/en/query_language/functions/string_replace_functions.md
+++ b/docs/en/query_language/functions/string_replace_functions.md
@@ -19,7 +19,7 @@ Also keep in mind that a string literal requires an extra escape.
Example 1. Converting the date to American format:
-``` sql
+```sql
SELECT DISTINCT
EventDate,
replaceRegexpOne(toString(EventDate), '(\\d{4})-(\\d{2})-(\\d{2})', '\\2/\\3/\\1') AS res
@@ -28,7 +28,7 @@ LIMIT 7
FORMAT TabSeparated
```
-```
+```text
2014-03-17 03/17/2014
2014-03-18 03/18/2014
2014-03-19 03/19/2014
@@ -40,11 +40,11 @@ FORMAT TabSeparated
Example 2. Copying a string ten times:
-``` sql
+```sql
SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0') AS res
```
-```
+```text
┌─res────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World! │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
@@ -54,11 +54,11 @@ SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0')
This does the same thing, but replaces all the occurrences. Example:
-``` sql
+```sql
SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
```
-```
+```text
┌─res────────────────────────┐
│ HHeelllloo,, WWoorrlldd!! │
└────────────────────────────┘
@@ -67,11 +67,11 @@ SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
As an exception, if a regular expression worked on an empty substring, the replacement is not made more than once.
Example:
-``` sql
+```sql
SELECT replaceRegexpAll('Hello, World!', '^', 'here: ') AS res
```
-```
+```text
┌─res─────────────────┐
│ here: Hello, World! │
└─────────────────────┘
diff --git a/docs/en/query_language/functions/type_conversion_functions.md b/docs/en/query_language/functions/type_conversion_functions.md
index 3930c942462..9245ec00120 100644
--- a/docs/en/query_language/functions/type_conversion_functions.md
+++ b/docs/en/query_language/functions/type_conversion_functions.md
@@ -194,7 +194,7 @@ When converting dates with times to numbers or vice versa, the date with time co
The date and date-with-time formats for the toDate/toDateTime functions are defined as follows:
-```
+```text
YYYY-MM-DD
YYYY-MM-DD hh:mm:ss
```
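
As a brief illustration of these formats:

```sql
SELECT toDate('2019-01-01') AS d, toDateTime('2019-01-01 00:00:00') AS dt
```
```text
┌──────────d─┬──────────────────dt─┐
│ 2019-01-01 │ 2019-01-01 00:00:00 │
└────────────┴─────────────────────┘
```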
@@ -207,13 +207,13 @@ Conversion between numeric types uses the same rules as assignments between diff
Additionally, the toString function of the DateTime argument can take a second String argument containing the name of the time zone. Example: `Asia/Yekaterinburg`. In this case, the time is formatted according to the specified time zone.
-``` sql
+```sql
SELECT
now() AS now_local,
toString(now(), 'Asia/Yekaterinburg') AS now_yekat
```
-```
+```text
┌───────────now_local─┬─now_yekat───────────┐
│ 2016-06-15 00:11:21 │ 2016-06-15 02:11:21 │
└─────────────────────┴─────────────────────┘
@@ -232,21 +232,21 @@ Accepts a String or FixedString argument. Returns the String with the content tr
Example:
-``` sql
+```sql
SELECT toFixedString('foo', 8) AS s, toStringCutToZero(s) AS s_cut
```
-```
+```text
┌─s─────────────┬─s_cut─┐
│ foo\0\0\0\0\0 │ foo │
└───────────────┴───────┘
```
-``` sql
+```sql
SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut
```
-```
+```text
┌─s──────────┬─s_cut─┐
│ foo\0bar\0 │ foo │
└────────────┴───────┘
@@ -278,7 +278,7 @@ Converts 'x' to the 't' data type. The syntax CAST(x AS t) is also supported.
Example:
-``` sql
+```sql
SELECT
'2016-06-15 23:00:00' AS timestamp,
CAST(timestamp AS DateTime) AS datetime,
@@ -287,7 +287,7 @@ SELECT
CAST(timestamp, 'FixedString(22)') AS fixed_string
```
-```
+```text
┌─timestamp───────────┬────────────datetime─┬───────date─┬─string──────────────┬─fixed_string──────────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00\0\0\0 │
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
@@ -297,16 +297,19 @@ Conversion to FixedString(N) only works for arguments of type String or FixedStr
Type conversion to [Nullable](../../data_types/nullable.md) and back is supported. Example:
-```
+```sql
SELECT toTypeName(x) FROM t_null
-
+```
+```text
┌─toTypeName(x)─┐
│ Int8 │
│ Int8 │
└───────────────┘
-
+```
+```sql
SELECT toTypeName(CAST(x, 'Nullable(UInt16)')) FROM t_null
-
+```
+```text
┌─toTypeName(CAST(x, 'Nullable(UInt16)'))─┐
│ Nullable(UInt16) │
│ Nullable(UInt16) │
@@ -328,7 +331,7 @@ SELECT
date + interval_to_week
```
-```
+```text
┌─plus(date, interval_week)─┬─plus(date, interval_to_week)─┐
│ 2019-01-08 │ 2019-01-08 │
└───────────────────────────┴──────────────────────────────┘
diff --git a/docs/en/query_language/functions/url_functions.md b/docs/en/query_language/functions/url_functions.md
index f4ff63ce021..43b92633653 100644
--- a/docs/en/query_language/functions/url_functions.md
+++ b/docs/en/query_language/functions/url_functions.md
@@ -16,7 +16,7 @@ Examples of typical returned values: http, https, ftp, mailto, tel, magnet...
Extracts the hostname from a URL.
-```
+```sql
domain(url)
```
@@ -27,7 +27,7 @@ domain(url)
The URL can be specified with or without a scheme. Examples:
-```
+```text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
@@ -35,7 +35,7 @@ https://yandex.com/time/
For these examples, the `domain` function returns the following results:
-```
+```text
some.svn-hosting.com
some.svn-hosting.com
yandex.com
@@ -67,7 +67,7 @@ Returns the domain and removes no more than one 'www.' from the beginning of it,
Extracts the top-level domain from a URL.
-```
+```sql
topLevelDomain(url)
```
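
A minimal sketch:

```sql
SELECT topLevelDomain('https://yandex.com/time/') AS tld
```
```text
┌─tld─┐
│ com │
└─────┘
```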
@@ -77,7 +77,7 @@ topLevelDomain(url)
The URL can be specified with or without a scheme. Examples:
-```
+```text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
@@ -151,7 +151,7 @@ Returns an array containing the URL, truncated at the end by the symbols /,? in
The same as above, but without the protocol and host in the result. The / element (root) is not included. Example: the function is used to implement tree reports of URLs in Yandex.Metrica.
-```
+```text
URLPathHierarchy('https://example.com/browse/CONV-6788') =
[
'/browse/',
@@ -164,11 +164,11 @@ URLPathHierarchy('https://example.com/browse/CONV-6788') =
Returns the decoded URL.
Example:
-``` sql
+```sql
SELECT decodeURLComponent('http://127.0.0.1:8123/?query=SELECT%201%3B') AS DecodedURL;
```
-```
+```text
┌─DecodedURL─────────────────────────────┐
│ http://127.0.0.1:8123/?query=SELECT 1; │
└────────────────────────────────────────┘
diff --git a/docs/en/query_language/functions/uuid_functions.md b/docs/en/query_language/functions/uuid_functions.md
index e353ed58f86..2ab89dabe9f 100644
--- a/docs/en/query_language/functions/uuid_functions.md
+++ b/docs/en/query_language/functions/uuid_functions.md
@@ -18,13 +18,14 @@ The UUID type value.
This example demonstrates creating a table with the UUID type column and inserting a value into the table.
-``` sql
-:) CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog
+```sql
+CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog
-:) INSERT INTO t_uuid SELECT generateUUIDv4()
-
-:) SELECT * FROM t_uuid
+INSERT INTO t_uuid SELECT generateUUIDv4()
+SELECT * FROM t_uuid
+```
+```text
┌────────────────────────────────────x─┐
│ f4bf890f-f9dc-4332-ad5c-0c18e73f28e9 │
└──────────────────────────────────────┘
@@ -44,9 +45,10 @@ The UUID type value.
**Usage example**
-``` sql
-:) SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
-
+```sql
+SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
+```
+```text
┌─────────────────────────────────uuid─┐
│ 61f0c404-5cb3-11e7-907b-a6006ad3dba0 │
└──────────────────────────────────────┘
@@ -56,7 +58,7 @@ The UUID type value.
Accepts a string containing 36 characters in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, and returns it as a set of bytes in a [FixedString(16)](../../data_types/fixedstring.md).
-``` sql
+```sql
UUIDStringToNum(String)
```
@@ -66,10 +68,12 @@ FixedString(16)
**Usage examples**
-``` sql
-:) SELECT
+```sql
+SELECT
'612f3c40-5d3b-217e-707b-6a546a3d7b29' AS uuid,
UUIDStringToNum(uuid) AS bytes
+```
+```text
┌─uuid─────────────────────────────────┬─bytes────────────┐
│ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │ a/<@];!~p{jTj={) │
@@ -80,7 +84,7 @@ FixedString(16)
Accepts a [FixedString(16)](../../data_types/fixedstring.md) value, and returns a string containing 36 characters in text format.
-``` sql
+```sql
UUIDNumToString(FixedString(16))
```
@@ -90,11 +94,12 @@ String.
**Usage example**
-``` sql
+```sql
SELECT
'a/<@];!~p{jTj={)' AS bytes,
UUIDNumToString(toFixedString(bytes, 16)) AS uuid
-
+```
+```text
┌─bytes────────────┬─uuid─────────────────────────────────┐
│ a/<@];!~p{jTj={) │ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │
└──────────────────┴──────────────────────────────────────┘
diff --git a/docs/en/query_language/functions/ym_dict_functions.md b/docs/en/query_language/functions/ym_dict_functions.md
index 8039c77edea..b04c16dab33 100644
--- a/docs/en/query_language/functions/ym_dict_functions.md
+++ b/docs/en/query_language/functions/ym_dict_functions.md
@@ -20,7 +20,7 @@ All the dictionaries are re-loaded in runtime (once every certain number of seco
All functions for working with regions have an optional argument at the end – the dictionary key. It is referred to as the geobase.
Example:
-```
+```sql
regionToCountry(RegionID) – Uses the default dictionary: /opt/geo/regions_hierarchy.txt
regionToCountry(RegionID, '') – Uses the default dictionary: /opt/geo/regions_hierarchy.txt
regionToCountry(RegionID, 'ua') – Uses the dictionary for the 'ua' key: /opt/geo/regions_hierarchy_ua.txt
@@ -34,13 +34,13 @@ Accepts a UInt32 number – the region ID from the Yandex geobase. If this regio
Converts a region to an area (type 5 in the geobase). In every other way, this function is the same as 'regionToCity'.
-``` sql
+```sql
SELECT DISTINCT regionToName(regionToArea(toUInt32(number), 'ua'))
FROM system.numbers
LIMIT 15
```
-```
+```text
┌─regionToName(regionToArea(toUInt32(number), \'ua\'))─┐
│ │
│ Moscow and Moscow region │
@@ -64,13 +64,13 @@ LIMIT 15
Converts a region to a federal district (type 4 in the geobase). In every other way, this function is the same as 'regionToCity'.
-``` sql
+```sql
SELECT DISTINCT regionToName(regionToDistrict(toUInt32(number), 'ua'))
FROM system.numbers
LIMIT 15
```
-```
+```text
┌─regionToName(regionToDistrict(toUInt32(number), \'ua\'))─┐
│ │
│ Central federal district │
diff --git a/docs/en/query_language/insert_into.md b/docs/en/query_language/insert_into.md
index d954a470847..2b361fd5a18 100644
--- a/docs/en/query_language/insert_into.md
+++ b/docs/en/query_language/insert_into.md
@@ -5,7 +5,7 @@ Adding data.
Basic query format:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] VALUES (v11, v12, v13), (v21, v22, v23), ...
```
@@ -18,13 +18,13 @@ If [strict_insert_defaults=1](../operations/settings/settings.md), columns that
Data can be passed to the INSERT in any [format](../interfaces/formats.md#formats) supported by ClickHouse. The format must be specified explicitly in the query:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT format_name data_set
```
For example, the following query format is identical to the basic version of INSERT ... VALUES:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23), ...
```
@@ -32,7 +32,7 @@ ClickHouse removes all spaces and one line feed (if there is one) before the dat
Example:
-``` sql
+```sql
INSERT INTO t FORMAT TabSeparated
11 Hello, world!
22 Qwerty
@@ -46,7 +46,7 @@ If table has [constraints](create.md#constraints), their expressions will be che
### Inserting The Results of `SELECT` {#insert_query_insert-select}
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] SELECT ...
```
diff --git a/docs/en/query_language/misc.md b/docs/en/query_language/misc.md
index 337049d6624..4272bb5c155 100644
--- a/docs/en/query_language/misc.md
+++ b/docs/en/query_language/misc.md
@@ -10,7 +10,7 @@ After executing an ATTACH query, the server will know about the existence of the
If the table was previously detached (``DETACH``), meaning that its structure is known, you can use shorthand without defining the structure.
-``` sql
+```sql
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
```
@@ -20,7 +20,7 @@ This query is used when starting the server. The server stores table metadata as
Checks if the data in the table is corrupted.
-``` sql
+```sql
CHECK TABLE [db.]name
```
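
For example, a minimal sketch (the table name is illustrative); the `result` column contains `1` if the data is intact:

```sql
CHECK TABLE test_table
```
```text
┌─result─┐
│      1 │
└────────┘
```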
@@ -56,7 +56,7 @@ If the table is corrupted, you can copy the non-corrupted data to another table.
## DESCRIBE TABLE {#misc-describe-table}
-``` sql
+```sql
DESC|DESCRIBE TABLE [db.]table [INTO OUTFILE filename] [FORMAT format]
```
@@ -74,7 +74,7 @@ Nested data structures are output in "expanded" format. Each column is shown sep
Deletes information about the 'name' table from the server, so the server no longer knows about the table's existence.
-``` sql
+```sql
DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
@@ -87,14 +87,14 @@ There is no `DETACH DATABASE` query.
This query has two types: `DROP DATABASE` and `DROP TABLE`.
-``` sql
+```sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```
Deletes all tables inside the 'db' database, then deletes the 'db' database itself.
If `IF EXISTS` is specified, it doesn't return an error if the database doesn't exist.
-``` sql
+```sql
DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
@@ -103,7 +103,7 @@ If `IF EXISTS` is specified, it doesn't return an error if the table doesn't exi
## EXISTS
-``` sql
+```sql
EXISTS [TEMPORARY] TABLE [db.]name [INTO OUTFILE filename] [FORMAT format]
```
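
For example, the following query returns `1`, because the `system.numbers` table always exists:

```sql
EXISTS TABLE system.numbers
```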
@@ -111,7 +111,7 @@ Returns a single `UInt8`-type column, which contains the single value `0` if the
## KILL QUERY
-``` sql
+```sql
KILL QUERY [ON CLUSTER cluster]
WHERE
[SYNC|ASYNC|TEST]
@@ -123,7 +123,7 @@ The queries to terminate are selected from the system.processes table using the
Examples:
-``` sql
+```sql
-- Forcibly terminates all queries with the specified query_id:
KILL QUERY WHERE query_id='2-857d-4a57-9ee0-327da5d60a90'
@@ -173,26 +173,26 @@ Changes already made by the mutation are not rolled back.
## OPTIMIZE {#misc_operations-optimize}
-``` sql
+```sql
OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition] [FINAL]
```
-This query tries to initialize an unscheduled merge of data parts for tables with a table engine of [MergeTree](../operations/table_engines/mergetree.md) family. Other kinds of table engines are not supported.
+This query tries to initialize an unscheduled merge of data parts for tables with a table engine from the [MergeTree](../operations/table_engines/mergetree.md) family. Other kinds of table engines aren't supported.
-When `OPTIMIZE` is used with [ReplicatedMergeTree](../operations/table_engines/replication.md) family of table engines, ClickHouse creates a task for merging and waits for execution on all nodes (if the `replication_alter_partitions_sync` setting is enabled).
+When `OPTIMIZE` is used with the [ReplicatedMergeTree](../operations/table_engines/replication.md) family of table engines, ClickHouse creates a task for merging and waits for execution on all nodes (if the `replication_alter_partitions_sync` setting is enabled).
-- If `OPTIMIZE` doesn't perform merging for any reason, it doesn't notify the client about it. To enable notification use the [optimize_throw_if_noop](../operations/settings/settings.md#setting-optimize_throw_if_noop) setting.
+- If `OPTIMIZE` doesn't perform a merge for any reason, it doesn't notify the client. To enable notifications, use the [optimize_throw_if_noop](../operations/settings/settings.md#setting-optimize_throw_if_noop) setting.
- If you specify a `PARTITION`, only the specified partition is optimized.
- If you specify `FINAL`, optimization is performed even when all the data is already in one part.
-!!! warning
- OPTIMIZE can't fix the "Too many parts" error.
+!!! warning "Warning"
+ `OPTIMIZE` can't fix the "Too many parts" error.
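
Example (a sketch; the table name is a placeholder for any MergeTree-family table):

```sql
OPTIMIZE TABLE my_table FINAL
```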
## RENAME {#misc_operations-rename}
Renames one or more tables.
-``` sql
+```sql
RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ... [ON CLUSTER cluster]
```
@@ -216,7 +216,7 @@ For more information, see [Settings](../operations/settings/settings.md).
## SHOW CREATE TABLE
-``` sql
+```sql
SHOW CREATE [TEMPORARY] TABLE [db.]table [INTO OUTFILE filename] [FORMAT format]
```
@@ -224,7 +224,7 @@ Returns a single `String`-type 'statement' column, which contains a single value
## SHOW DATABASES {#show-databases}
-``` sql
+```sql
SHOW DATABASES [INTO OUTFILE filename] [FORMAT format]
```
@@ -235,7 +235,7 @@ See also the section "Formats".
## SHOW PROCESSLIST
-``` sql
+```sql
SHOW PROCESSLIST [INTO OUTFILE filename] [FORMAT format]
```
@@ -262,12 +262,12 @@ This query is nearly identical to: `SELECT * FROM system.processes`. The differe
Tip (execute in the console):
```bash
-watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
+$ watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
```
## SHOW TABLES
-``` sql
+```sql
SHOW [TEMPORARY] TABLES [FROM db] [LIKE 'pattern'] [INTO OUTFILE filename] [FORMAT format]
```
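
For example, to list the system tables whose names contain `co`:

```sql
SHOW TABLES FROM system LIKE '%co%'
```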
@@ -282,7 +282,7 @@ See also the section "LIKE operator".
## TRUNCATE
-``` sql
+```sql
TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
@@ -292,7 +292,7 @@ The `TRUNCATE` query is not supported for [View](../operations/table_engines/vie
## USE
-``` sql
+```sql
USE db
```
diff --git a/docs/en/query_language/operators.md b/docs/en/query_language/operators.md
index 4e79c9e805f..0a7a81550a2 100644
--- a/docs/en/query_language/operators.md
+++ b/docs/en/query_language/operators.md
@@ -67,7 +67,7 @@ Groups of operators are listed in order of priority (the higher it is in the lis
## Operator for Working With Dates and Times {#operators-datetime}
-``` sql
+```sql
EXTRACT(part FROM date);
```
@@ -88,7 +88,7 @@ The `date` parameter specifies the date or the time to process. Either [Date](..
Examples:
-``` sql
+```sql
SELECT EXTRACT(DAY FROM toDate('2017-06-15'));
SELECT EXTRACT(MONTH FROM toDate('2017-06-15'));
SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
@@ -96,7 +96,7 @@ SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
In the following example we create a table and insert into it a value with the `DateTime` type.
-``` sql
+```sql
CREATE TABLE test.Orders
(
OrderId UInt64,
@@ -106,10 +106,10 @@ CREATE TABLE test.Orders
ENGINE = Log;
```
-``` sql
+```sql
INSERT INTO test.Orders VALUES (1, 'Jarlsberg Cheese', toDateTime('2008-10-11 13:23:44'));
```
-``` sql
+```sql
SELECT
toYear(OrderDate) AS OrderYear,
toMonth(OrderDate) AS OrderMonth,
@@ -118,13 +118,15 @@ SELECT
toMinute(OrderDate) AS OrderMinute,
toSecond(OrderDate) AS OrderSecond
FROM test.Orders;
+```
+```text
┌─OrderYear─┬─OrderMonth─┬─OrderDay─┬─OrderHour─┬─OrderMinute─┬─OrderSecond─┐
│ 2008 │ 10 │ 11 │ 13 │ 23 │ 44 │
└───────────┴────────────┴──────────┴───────────┴─────────────┴─────────────┘
```
-You can see more examples in [tests](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
+You can see more examples in [tests](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
## Logical Negation Operator
@@ -148,7 +150,7 @@ The conditional operator calculates the values of b and c, then checks whether c
## Conditional Expression {#operator_case}
-``` sql
+```sql
CASE [x]
WHEN a THEN b
[WHEN ... THEN ...]
@@ -198,18 +200,13 @@ ClickHouse supports the `IS NULL` and `IS NOT NULL` operators.
- `0` otherwise.
- For other values, the `IS NULL` operator always returns `0`.
-```bash
-:) SELECT x+100 FROM t_null WHERE y IS NULL
-
-SELECT x + 100
-FROM t_null
-WHERE isNull(y)
-
+```sql
+SELECT x+100 FROM t_null WHERE y IS NULL
+```
+```text
┌─plus(x, 100)─┐
│ 101 │
└──────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
@@ -220,18 +217,13 @@ WHERE isNull(y)
- `1` otherwise.
- For other values, the `IS NOT NULL` operator always returns `1`.
-```bash
-:) SELECT * FROM t_null WHERE y IS NOT NULL
-
-SELECT *
-FROM t_null
-WHERE isNotNull(y)
-
+```sql
+SELECT * FROM t_null WHERE y IS NOT NULL
+```
+```text
┌─x─┬─y─┐
│ 2 │ 3 │
└───┴───┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
[Original article](https://clickhouse.yandex/docs/en/query_language/operators/)
diff --git a/docs/en/query_language/select.md b/docs/en/query_language/select.md
index afa2df73c3d..fb1c529a75b 100644
--- a/docs/en/query_language/select.md
+++ b/docs/en/query_language/select.md
@@ -2,7 +2,7 @@
`SELECT` performs data retrieval.
-``` sql
+```sql
[WITH expr_list|(subquery)]
SELECT [DISTINCT] expr_list
[FROM [db.]table | (subquery) | table_function] [FINAL]
@@ -35,7 +35,7 @@ This section provides support for Common Table Expressions ([CTE](https://en.wik
Results of WITH clause expressions can be used inside SELECT clause.
Example 1: Using constant expression as "variable"
-```
+```sql
WITH '2019-08-01 15:23:00' as ts_upper_bound
SELECT *
FROM hits
@@ -45,7 +45,7 @@ WHERE
```
Example 2: Evicting sum(bytes) expression result from SELECT clause column list
-```
+```sql
WITH sum(bytes) as s
SELECT
formatReadableSize(s),
@@ -56,7 +56,7 @@ ORDER BY s
```
Example 3: Using results of scalar subquery
-```
+```sql
/* this example returns the top 10 largest tables */
WITH
(
@@ -75,7 +75,7 @@ LIMIT 10
Example 4: Re-using expression in subquery
As a workaround for the current limitation on expression usage in subqueries, you may duplicate the expression.
-```
+```sql
WITH ['hello'] AS hello
SELECT
hello,
@@ -85,7 +85,8 @@ FROM
WITH ['hello'] AS hello
SELECT hello
)
-
+```
+```text
┌─hello─────┬─hello─────┐
│ ['hello'] │ ['hello'] │
└───────────┴───────────┘
@@ -111,7 +112,8 @@ In contrast to standard SQL, a synonym does not need to be specified after a sub
To execute a query, all the columns listed in the query are extracted from the appropriate table. Any columns not needed for the external query are thrown out of the subqueries.
If a query does not list any columns (for example, `SELECT count() FROM t`), some column is extracted from the table anyway (the smallest one is preferred), in order to calculate the number of rows.
-The `FINAL` modifier can be used in the `SELECT` select query for aggregating engines from the [MergeTree](../operations/table_engines/mergetree.md) family. When you specify `FINAL`, data is selected fully "merged". Keep in mind that using `FINAL` leads to a selection that includes columns related to the primary key, in addition to the columns specified in the `SELECT`. Additionally, the query will be executed in a single stream, and data will be merged during query execution. This means that when using `FINAL`, the query is processed slowly. In the most cases, avoid using `FINAL`.
+The `FINAL` modifier can be used in the `SELECT` query for engines from the [MergeTree](../operations/table_engines/mergetree.md) family. When you specify `FINAL`, data is selected fully "merged". Keep in mind that using `FINAL` leads to reading the columns related to the primary key in addition to the columns specified in the query. Additionally, the query is executed in a single thread, and data is merged during query execution. This means that with `FINAL` the query is processed slowly. In most cases, avoid using `FINAL`.
+The `FINAL` modifier can be applied to all engines of the MergeTree family that perform data transformations in background merges (except GraphiteMergeTree).
### SAMPLE Clause {#select-sample-clause}
@@ -149,7 +151,7 @@ Here `k` is the number from 0 to 1 (both fractional and decimal notations are su
In a `SAMPLE k` clause, the sample is taken from the `k` fraction of data. The example is shown below:
-``` sql
+```sql
SELECT
Title,
count() * 10 AS PageViews
@@ -177,27 +179,27 @@ The `_sample_factor` column contains relative coefficients that are calculated d
Let's consider the table `visits`, which contains the statistics about site visits. The first example shows how to calculate the number of page views:
-``` sql
+```sql
SELECT sum(PageViews * _sample_factor)
FROM visits
SAMPLE 10000000
-```
+```
The next example shows how to calculate the total number of visits:
-``` sql
+```sql
SELECT sum(_sample_factor)
FROM visits
SAMPLE 10000000
-```
+```
The example below shows how to calculate the average session duration. Note that you don't need to use the relative coefficient to calculate the average values.
-``` sql
+```sql
SELECT avg(Duration)
FROM visits
SAMPLE 10000000
-```
+```
#### SAMPLE k OFFSET m {#select-sample-offset}
@@ -205,7 +207,7 @@ Here `k` and `m` are numbers from 0 to 1. Examples are shown below.
**Example 1**
-``` sql
+```sql
SAMPLE 1/10
```
@@ -215,7 +217,7 @@ In this example, the sample is 1/10th of all data:
**Example 2**
-``` sql
+```sql
SAMPLE 1/10 OFFSET 1/2
```
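
Expanded into a full query over the `visits` table from the examples above (a sketch):

```sql
SELECT count()
FROM visits
SAMPLE 1/10 OFFSET 1/2
```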
@@ -227,7 +229,7 @@ Here, a sample of 10% is taken from the second half of the data.
Allows executing `JOIN` with an array or nested data structure. The intent is similar to the [arrayJoin](functions/array_join.md#functions_arrayjoin) function, but its functionality is broader.
-``` sql
+```sql
SELECT
FROM
[LEFT] ARRAY JOIN
@@ -246,7 +248,7 @@ Supported types of `ARRAY JOIN` are listed below:
The examples below demonstrate the usage of the `ARRAY JOIN` and `LEFT ARRAY JOIN` clauses. Let's create a table with an [Array](../data_types/array.md) type column and insert values into it:
-``` sql
+```sql
CREATE TABLE arrays_test
(
s String,
@@ -256,7 +258,7 @@ CREATE TABLE arrays_test
INSERT INTO arrays_test
VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
```
-```
+```text
┌─s───────────┬─arr─────┐
│ Hello │ [1,2] │
│ World │ [3,4,5] │
@@ -266,12 +268,12 @@ VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
The example below uses the `ARRAY JOIN` clause:
-``` sql
+```sql
SELECT s, arr
FROM arrays_test
ARRAY JOIN arr;
```
-```
+```text
┌─s─────┬─arr─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -283,12 +285,12 @@ ARRAY JOIN arr;
The next example uses the `LEFT ARRAY JOIN` clause:
-``` sql
+```sql
SELECT s, arr
FROM arrays_test
LEFT ARRAY JOIN arr;
```
-```
+```text
┌─s───────────┬─arr─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -303,13 +305,13 @@ LEFT ARRAY JOIN arr;
An alias can be specified for an array in the `ARRAY JOIN` clause. In this case, an array item can be accessed by this alias, but the array itself is accessed by the original name. Example:
-``` sql
+```sql
SELECT s, arr, a
FROM arrays_test
ARRAY JOIN arr AS a;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┐
│ Hello │ [1,2] │ 1 │
│ Hello │ [1,2] │ 2 │
@@ -321,13 +323,13 @@ ARRAY JOIN arr AS a;
Using aliases, you can perform `ARRAY JOIN` with an external array. For example:
-``` sql
+```sql
SELECT s, arr_external
FROM arrays_test
ARRAY JOIN [1, 2, 3] AS arr_external;
```
-```
+```text
┌─s───────────┬─arr_external─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -343,13 +345,13 @@ ARRAY JOIN [1, 2, 3] AS arr_external;
Multiple arrays can be comma-separated in the `ARRAY JOIN` clause. In this case, `JOIN` is performed with them simultaneously (the direct sum, not the cartesian product). Note that all the arrays must have the same size. Example:
-``` sql
+```sql
SELECT s, arr, a, num, mapped
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS mapped;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┬─num─┬─mapped─┐
│ Hello │ [1,2] │ 1 │ 1 │ 2 │
│ Hello │ [1,2] │ 2 │ 2 │ 3 │
@@ -361,13 +363,13 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS ma
The example below uses the [arrayEnumerate](functions/array_functions.md#array_functions-arrayenumerate) function:
-``` sql
+```sql
SELECT s, arr, a, num, arrayEnumerate(arr)
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┬─num─┬─arrayEnumerate(arr)─┐
│ Hello │ [1,2] │ 1 │ 1 │ [1,2] │
│ Hello │ [1,2] │ 2 │ 2 │ [1,2] │
@@ -381,7 +383,7 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
`ARRAY JOIN` also works with [nested data structures](../data_types/nested_data_structures/nested.md). Example:
-``` sql
+```sql
CREATE TABLE nested_test
(
s String,
@@ -394,7 +396,7 @@ INSERT INTO nested_test
VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', [], []);
```
-```
+```text
┌─s───────┬─nest.x──┬─nest.y─────┐
│ Hello │ [1,2] │ [10,20] │
│ World │ [3,4,5] │ [30,40,50] │
@@ -402,13 +404,13 @@ VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', []
└─────────┴─────────┴────────────┘
```
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │ 1 │ 10 │
│ Hello │ 2 │ 20 │
@@ -420,13 +422,13 @@ ARRAY JOIN nest;
When specifying names of nested data structures in `ARRAY JOIN`, the meaning is the same as `ARRAY JOIN` with all the array elements that it consists of. Examples are listed below:
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`, `nest.y`;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │ 1 │ 10 │
│ Hello │ 2 │ 20 │
@@ -438,13 +440,13 @@ ARRAY JOIN `nest.x`, `nest.y`;
This variation also makes sense:
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─────┐
│ Hello │ 1 │ [10,20] │
│ Hello │ 2 │ [10,20] │
@@ -456,13 +458,13 @@ ARRAY JOIN `nest.x`;
An alias may be used for a nested data structure, in order to select either the `JOIN` result or the source array. Example:
-``` sql
+```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest AS n;
```
-```
+```text
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┐
│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │
│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │
@@ -474,13 +476,13 @@ ARRAY JOIN nest AS n;
Example of using the [arrayEnumerate](functions/array_functions.md#array_functions-arrayenumerate) function:
-``` sql
+```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`, num
FROM nested_test
ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num;
```
-```
+```text
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┬─num─┐
│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │ 1 │
│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │ 2 │
@@ -497,7 +499,7 @@ Joins the data in the normal [SQL JOIN](https://en.wikipedia.org/wiki/Join_(SQL)
!!! info "Note"
Not related to [ARRAY JOIN](#select-array-join-clause).
-``` sql
+```sql
SELECT
FROM
[GLOBAL] [ANY|ALL] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER] JOIN
@@ -524,13 +526,13 @@ If a query contains the `WHERE` clause, ClickHouse tries to pushdown filters fro
We recommend the `JOIN ON` or `JOIN USING` syntax for creating queries. For example:
-```
+```sql
SELECT * FROM t1 JOIN t2 ON t1.a = t2.a JOIN t3 ON t1.a = t3.a
```
You can use comma-separated lists of tables in the `FROM` clause. This works only with the [allow_experimental_cross_to_join_conversion = 1](../operations/settings/settings.md#settings-allow_experimental_cross_to_join_conversion) setting. For example:
-```
+```sql
SELECT * FROM t1, t2, t3 WHERE t1.a = t2.a AND t1.a = t3.a
```
@@ -561,7 +563,7 @@ You can use the following types of syntax:
ON equi_cond AND closest_match_cond
```
- You can use any number of equality conditions and exactly one closest match condition. For example, `SELECT count() FROM A ASOF LEFT JOIN B ON A.a == B.b AND B.t <= A.t`. There is just `table_2.some_col <= table_1.some_col` and `table_1.some_col >= table2.some_col` types of conditions are available. You cannot apply other conditions like `>` or `!=`.
+  You can use any number of equality conditions and exactly one closest match condition. For example, `SELECT count() FROM A ASOF LEFT JOIN B ON A.a == B.b AND B.t <= A.t`. Only the `table_2.some_col <= table_1.some_col` and `table_1.some_col >= table_2.some_col` condition types are available. You can't apply other conditions like `>` or `!=`.
- `ASOF JOIN ... USING`
@@ -576,7 +578,7 @@ You can use the following types of syntax:
For example, consider the following tables:
-```
+```text
table_1 table_2
event | ev_time | user_id event | ev_time | user_id
@@ -588,7 +590,7 @@ event_1_2 | 13:00 | 42 event_2_3 | 13:00 | 42
... ...
```
-`ASOF JOIN` can take the timestamp of a user event from `table_1` and find an event in `table_2` where the timestamp is closest (equal or less) to the timestamp of the event from `table_1`. Herewith the `user_id` column can be used for joining on equality and the `ev_time` column can be used for joining on the closest match. In our example, `event_1_1` can be joined with `event_2_1`, `event_1_2` can be joined with `event_2_3`, but `event_2_2` cannot be joined.
+`ASOF JOIN` can take the timestamp of a user event from `table_1` and find an event in `table_2` where the timestamp is closest (equal to or less than) to the timestamp of the event from `table_1`. Here, the `user_id` column can be used for joining on equality and the `ev_time` column can be used for joining on the closest match. In our example, `event_1_1` can be joined with `event_2_1` and `event_1_2` can be joined with `event_2_3`, but `event_2_2` can't be joined.
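
A sketch of such a query over the sample tables above, joining on `user_id` for equality and on `ev_time` for the closest match:

```sql
SELECT *
FROM table_1
ASOF LEFT JOIN table_2
ON table_1.user_id = table_2.user_id AND table_2.ev_time <= table_1.ev_time
```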
!!! note "Note"
@@ -610,7 +612,7 @@ When running a `JOIN`, there is no optimization of the order of execution in rel
Example:
-``` sql
+```sql
SELECT
CounterID,
hits,
@@ -634,7 +636,7 @@ ORDER BY hits DESC
LIMIT 10
```
-```
+```text
┌─CounterID─┬───hits─┬─visits─┐
│ 1143050 │ 523264 │ 13665 │
│ 731962 │ 475698 │ 102716 │
@@ -724,7 +726,7 @@ If a query contains only table columns inside aggregate functions, the GROUP BY
Example:
-``` sql
+```sql
SELECT
count(),
median(FetchTiming > 60 ? 60 : FetchTiming),
@@ -738,7 +740,7 @@ As opposed to MySQL (and conforming to standard SQL), you can't get some value o
Example:
-``` sql
+```sql
SELECT
domainWithoutWWW(URL) AS domain,
count(),
@@ -761,7 +763,7 @@ Here's an example to show what this means.
Assume you have this table:
-```
+```text
┌─x─┬────y─┐
│ 1 │ 2 │
│ 2 │ ᴺᵁᴸᴸ │
@@ -773,7 +775,7 @@ Assume you have this table:
The query `SELECT sum(x), y FROM t_null_big GROUP BY y` results in:
-```
+```text
┌─sum(x)─┬────y─┐
│ 4 │ 2 │
│ 3 │ 3 │
@@ -877,7 +879,7 @@ The `SELECT * FROM limit_by ORDER BY id, val LIMIT 2 OFFSET 1 BY id` query retur
The following query returns the top 5 referrers for each `domain, device_type` pair with a maximum of 100 rows in total (`LIMIT n BY + LIMIT`).
-``` sql
+```sql
SELECT
domainWithoutWWW(URL) AS domain,
domainWithoutWWW(REFERRER_URL) AS referrer,
@@ -918,7 +920,7 @@ Example:
For the table
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 2 │
@@ -935,7 +937,7 @@ For the table
Run the query `SELECT * FROM t_null_nan ORDER BY y NULLS FIRST` to get:
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 7 │ ᴺᵁᴸᴸ │
@@ -1031,7 +1033,7 @@ If there isn't an `ORDER BY` clause that explicitly sorts results, the result ma
You can use UNION ALL to combine any number of queries. Example:
-``` sql
+```sql
SELECT CounterID, 1 AS table, toInt64(count()) AS c
FROM test.hits
GROUP BY CounterID
@@ -1078,7 +1080,7 @@ The left side of the operator is either a single column or a tuple.
Examples:
-``` sql
+```sql
SELECT UserID IN (123, 456) FROM ...
SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
```
@@ -1096,7 +1098,7 @@ If the right side of the operator is a table name that has the Set engine (a pre
The subquery may specify more than one column for filtering tuples.
Example:
-``` sql
+```sql
SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
```
@@ -1105,7 +1107,7 @@ The columns to the left and right of the IN operator should have the same type.
The IN operator and subquery may occur in any part of the query, including in aggregate functions and lambda functions.
Example:
-``` sql
+```sql
SELECT
EventDate,
avg(UserID IN
@@ -1119,7 +1121,7 @@ GROUP BY EventDate
ORDER BY EventDate ASC
```
-```
+```text
┌──EventDate─┬────ratio─┐
│ 2014-03-17 │ 1 │
│ 2014-03-18 │ 0.807696 │
@@ -1140,7 +1142,7 @@ During request processing, the IN operator assumes that the result of an operati
Here is an example with the `t_null` table:
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -1149,7 +1151,7 @@ Here is an example with the `t_null` table:
Running the query `SELECT x FROM t_null WHERE y IN (NULL,3)` gives you the following result:
-```
+```text
┌─x─┐
│ 2 │
└───┘
@@ -1157,10 +1159,11 @@ Running the query `SELECT x FROM t_null WHERE y IN (NULL,3)` gives you the follo
You can see that the row in which `y = NULL` is thrown out of the query results. This is because ClickHouse can't decide whether `NULL` is included in the `(NULL,3)` set, returns `0` as the result of the operation, and `SELECT` excludes this row from the final output.
-```
+```sql
SELECT y IN (NULL, 3)
FROM t_null
-
+```
+```text
┌─in(y, tuple(NULL, 3))─┐
│ 0 │
│ 1 │
@@ -1189,13 +1192,13 @@ For a query to the **distributed_table**, the query will be sent to all the remo
For example, the query
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table
```
will be sent to all remote servers as
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table
```
@@ -1203,7 +1206,7 @@ and run on each of them in parallel, until it reaches the stage where intermedia
Now let's examine a query with IN:
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```
@@ -1211,7 +1214,7 @@ SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID I
This query will be sent to all remote servers as
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```
@@ -1221,19 +1224,19 @@ This will work correctly and optimally if you are prepared for this case and hav
To correct how the query works when data is spread randomly across the cluster servers, you could specify **distributed_table** inside a subquery. The query would look like this:
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
This query will be sent to all remote servers as
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
The subquery will begin running on each remote server. Since the subquery uses a distributed table, the subquery that is on each remote server will be resent to every remote server as
-``` sql
+```sql
SELECT UserID FROM local_table WHERE CounterID = 34
```
@@ -1241,19 +1244,19 @@ For example, if you have a cluster of 100 servers, executing the entire query wi
In such cases, you should always use GLOBAL IN instead of IN. Let's look at how it works for the query
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
The requestor server will run the subquery
-``` sql
+```sql
SELECT UserID FROM distributed_table WHERE CounterID = 34
```
and the result will be put in a temporary table in RAM. Then the request will be sent to each remote server as
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL IN _data1
```
diff --git a/docs/en/query_language/syntax.md b/docs/en/query_language/syntax.md
index 9c6e69f51bd..a67a0983961 100644
--- a/docs/en/query_language/syntax.md
+++ b/docs/en/query_language/syntax.md
@@ -4,7 +4,7 @@ There are two types of parsers in the system: the full SQL parser (a recursive d
In all cases except the `INSERT` query, only the full SQL parser is used.
The `INSERT` query uses both parsers:
-``` sql
+```sql
INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
```
@@ -112,7 +112,7 @@ Data types and table engines in the `CREATE` query are written the same way as i
An alias is a user-defined name for an expression in a query.
-```
+```sql
expr AS alias
```
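
For example:

```sql
SELECT number AS n, n * 2 AS doubled
FROM system.numbers
LIMIT 3
```

The alias `n` can be reused in other expressions of the same query.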
@@ -140,7 +140,7 @@ If an alias is defined for the result columns in the `SELECT` clause of a subque
Be careful with aliases that are the same as column or table names. Let's consider the following example:
-```
+```sql
CREATE TABLE t
(
a Int,
@@ -149,12 +149,13 @@ CREATE TABLE t
ENGINE = TinyLog()
```
-```
+```sql
SELECT
argMax(a, b),
sum(b) AS b
FROM t
-
+```
+```text
Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```
diff --git a/docs/en/query_language/table_functions/file.md b/docs/en/query_language/table_functions/file.md
index 0cb1f0d36bf..b4f74bd2d8c 100644
--- a/docs/en/query_language/table_functions/file.md
+++ b/docs/en/query_language/table_functions/file.md
@@ -1,9 +1,9 @@
# file
-Creates a table from a file.
+Creates a table from a file. This table function is similar to the [url](url.md) and [hdfs](hdfs.md) table functions.
-```
+```sql
file(path, format, structure)
```
@@ -39,28 +39,63 @@ FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2
```
-```
+```text
┌─column1─┬─column2─┬─column3─┐
│ 1 │ 2 │ 3 │
│ 3 │ 2 │ 1 │
└─────────┴─────────┴─────────┘
```
-``` sql
+```sql
-- getting the first 10 lines of a table that contains 3 columns of UInt32 type from a CSV file
SELECT * FROM file('test.csv', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32') LIMIT 10
```
**Globs in path**
-- `*` — Matches any number of any characters including none.
-- `?` — Matches any single character.
-- `{some_string,another_string,yet_another_one}` — Matches any of strings `'some_string', 'another_string', 'yet_another_one'`.
-- `{N..M}` — Matches any number in range from N to M including both borders.
+Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern (not only a suffix or prefix).
+
+- `*` — Substitutes any number of any characters except `/`, including the empty string.
+- `?` — Substitutes any single character.
+- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string'`, `'another_string'`, `'yet_another_one'`.
+- `{N..M}` — Substitutes any number in range from N to M including both borders.
+
+    Constructions with `{}` are similar to the [remote table function](../../query_language/table_functions/remote.md).
+
+**Example**
+
+1. Suppose we have several files with the following relative paths:
+
+- 'some_dir/some_file_1'
+- 'some_dir/some_file_2'
+- 'some_dir/some_file_3'
+- 'another_dir/some_file_1'
+- 'another_dir/some_file_2'
+- 'another_dir/some_file_3'
+
+2. Query the number of rows in these files:
+
+```sql
+SELECT count(*)
+FROM file('{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32')
+```
+
+3. Query the number of rows in all files of these two directories:
+
+```sql
+SELECT count(*)
+FROM file('{some,another}_dir/*', 'TSV', 'name String, value UInt32')
+```
!!! warning
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
-Multiple path components can have globs. For being processed file should exists and matches to the whole path pattern.
+**Example**
+
+Query the data from files named `file000`, `file001`, ... , `file999`:
+
+```sql
+SELECT count(*)
+FROM file('big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32')
+```
[Original article](https://clickhouse.yandex/docs/en/query_language/table_functions/file/)
diff --git a/docs/en/query_language/table_functions/hdfs.md b/docs/en/query_language/table_functions/hdfs.md
index cce9b308101..b069ce5a9c0 100644
--- a/docs/en/query_language/table_functions/hdfs.md
+++ b/docs/en/query_language/table_functions/hdfs.md
@@ -1,9 +1,9 @@
# hdfs
-Creates a table from a file in HDFS.
+Creates a table from files in HDFS. This table function is similar to the [url](url.md) and [file](file.md) table functions.
-```
+```sql
hdfs(URI, format, structure)
```
@@ -27,7 +27,7 @@ FROM hdfs('hdfs://hdfs1:9000/test', 'TSV', 'column1 UInt32, column2 UInt32, colu
LIMIT 2
```
-```
+```text
┌─column1─┬─column2─┬─column3─┐
│ 1 │ 2 │ 3 │
│ 3 │ 2 │ 1 │
@@ -36,14 +36,51 @@ LIMIT 2
**Globs in path**
-- `*` — Matches any number of any characters including none.
-- `?` — Matches any single character.
-- `{some_string,another_string,yet_another_one}` — Matches any of strings `'some_string', 'another_string', 'yet_another_one'`.
-- `{N..M}` — Matches any number in range from N to M including both borders.
+Multiple path components can have globs. To be processed, a file must exist and match the whole path pattern (not only a suffix or prefix).
+
+- `*` — Substitutes any number of any characters except `/`, including the empty string.
+- `?` — Substitutes any single character.
+- `{some_string,another_string,yet_another_one}` — Substitutes any of the strings `'some_string'`, `'another_string'`, `'yet_another_one'`.
+- `{N..M}` — Substitutes any number in range from N to M including both borders.
+
+    Constructions with `{}` are similar to the [remote table function](../../query_language/table_functions/remote.md).
+
+**Example**
+
+1. Suppose that we have several files with the following URIs on HDFS:
+
+- 'hdfs://hdfs1:9000/some_dir/some_file_1'
+- 'hdfs://hdfs1:9000/some_dir/some_file_2'
+- 'hdfs://hdfs1:9000/some_dir/some_file_3'
+- 'hdfs://hdfs1:9000/another_dir/some_file_1'
+- 'hdfs://hdfs1:9000/another_dir/some_file_2'
+- 'hdfs://hdfs1:9000/another_dir/some_file_3'
+
+2. Query the number of rows in these files:
+
+```sql
+SELECT count(*)
+FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV', 'name String, value UInt32')
+```
+
+3. Query the number of rows in all files of these two directories:
+
+```sql
+SELECT count(*)
+FROM hdfs('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV', 'name String, value UInt32')
+```
!!! warning
If your listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
-Multiple path components can have globs. For being processed file should exists and matches to the whole path pattern.
+**Example**
+
+Query the data from files named `file000`, `file001`, ... , `file999`:
+
+```sql
+SELECT count(*)
+FROM hdfs('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV', 'name String, value UInt32')
+```
[Original article](https://clickhouse.yandex/docs/en/query_language/table_functions/hdfs/)
diff --git a/docs/en/query_language/table_functions/index.md b/docs/en/query_language/table_functions/index.md
index dc2665ce7a5..0e27ba7b497 100644
--- a/docs/en/query_language/table_functions/index.md
+++ b/docs/en/query_language/table_functions/index.md
@@ -1,6 +1,6 @@
# Table Functions
-Table function is a method of constructing a table.
+Table functions are methods for constructing tables.
You can use table functions in:
@@ -10,7 +10,7 @@ You can use table functions in:
* [FROM](../select.md#select-from) clause of the `SELECT` query.
- The method of creating a temporary table, that is available only in current query. The table is deleted after the query finishes.
+ The method for creating a temporary table that is available only in the current query. The table is deleted when the query finishes.
!!! warning "Warning"
You can't use table functions if the [allow_ddl](../../operations/settings/permissions_for_queries.md#settings_allow_ddl) setting is disabled.
@@ -19,7 +19,7 @@ Function | Description
---------|------------
[file](file.md) | Creates a [File](../../operations/table_engines/file.md)-engine table.
[merge](merge.md) | Creates a [Merge](../../operations/table_engines/merge.md)-engine table.
-[numbers](numbers.md) | Creates a table with the single column filled with integer numbers.
+[numbers](numbers.md) | Creates a table with a single column filled with integer numbers.
[remote](remote.md) | Allows you to access remote servers without creating a [Distributed](../../operations/table_engines/distributed.md)-engine table.
[url](url.md) | Creates a [Url](../../operations/table_engines/url.md)-engine table.
[mysql](mysql.md) | Creates a [MySQL](../../operations/table_engines/mysql.md)-engine table.
diff --git a/docs/en/query_language/table_functions/input.md b/docs/en/query_language/table_functions/input.md
index 27cbaa4abbd..408ff7939ac 100644
--- a/docs/en/query_language/table_functions/input.md
+++ b/docs/en/query_language/table_functions/input.md
@@ -23,13 +23,13 @@ and data in `data.csv` has a different structure `(col1 String, col2 Date, col3
data from the `data.csv` into the `test` table with simultaneous conversion looks like this:
```bash
-cat data.csv | clickhouse-client --query="INSERT INTO test SELECT lower(col1), col3 * col3 FROM input('col1 String, col2 Date, col3 Int32') FORMAT CSV";
+$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT lower(col1), col3 * col3 FROM input('col1 String, col2 Date, col3 Int32') FORMAT CSV";
```
- If `data.csv` contains data of the same structure `test_structure` as the table `test` then these two queries are equal:
```bash
-cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV"
-cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV"
+$ cat data.csv | clickhouse-client --query="INSERT INTO test FORMAT CSV"
+$ cat data.csv | clickhouse-client --query="INSERT INTO test SELECT * FROM input('test_structure') FORMAT CSV"
```
[Original article](https://clickhouse.yandex/docs/en/query_language/table_functions/input/)
diff --git a/docs/en/query_language/table_functions/jdbc.md b/docs/en/query_language/table_functions/jdbc.md
index c7108a5af9f..03cd1710e6b 100644
--- a/docs/en/query_language/table_functions/jdbc.md
+++ b/docs/en/query_language/table_functions/jdbc.md
@@ -8,15 +8,15 @@ It supports Nullable types (based on DDL of remote table that is queried).
**Examples**
-``` sql
+```sql
SELECT * FROM jdbc('jdbc:mysql://localhost:3306/?user=root&password=root', 'schema', 'table')
```
-``` sql
+```sql
SELECT * FROM jdbc('mysql://localhost:3306/?user=root&password=root', 'schema', 'table')
```
-``` sql
+```sql
SELECT * FROM jdbc('datasource://mysql-local', 'schema', 'table')
```
diff --git a/docs/en/query_language/table_functions/mysql.md b/docs/en/query_language/table_functions/mysql.md
index f1d4773a6e8..4d643ba286d 100644
--- a/docs/en/query_language/table_functions/mysql.md
+++ b/docs/en/query_language/table_functions/mysql.md
@@ -2,7 +2,7 @@
Allows `SELECT` queries to be performed on data that is stored on a remote MySQL server.
-```
+```sql
mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
```
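
For example (a sketch; the address, credentials, and names are placeholders):

```sql
SELECT * FROM mysql('localhost:3306', 'database', 'table_name', 'user', 'password')
```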
@@ -32,7 +32,7 @@ A table object with the same columns as the original MySQL table.
Table in MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/en/query_language/table_functions/numbers.md b/docs/en/query_language/table_functions/numbers.md
index e8b025d922e..f3f64cf6399 100644
--- a/docs/en/query_language/table_functions/numbers.md
+++ b/docs/en/query_language/table_functions/numbers.md
@@ -7,7 +7,7 @@ Similar to the `system.numbers` table, it can be used for testing and generating
The following queries are equivalent:
-``` sql
+```sql
SELECT * FROM numbers(10);
SELECT * FROM numbers(0, 10);
SELECT * FROM system.numbers LIMIT 10;
@@ -15,7 +15,7 @@ SELECT * FROM system.numbers LIMIT 10;
Examples:
-``` sql
+```sql
-- Generate a sequence of dates from 2010-01-01 to 2010-12-31
select toDate('2010-01-01') + number as d FROM numbers(365);
```
diff --git a/docs/en/query_language/table_functions/odbc.md b/docs/en/query_language/table_functions/odbc.md
index 0c3204d4ca5..65036a6219d 100644
--- a/docs/en/query_language/table_functions/odbc.md
+++ b/docs/en/query_language/table_functions/odbc.md
@@ -2,7 +2,7 @@
Returns table that is connected via [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).
-```
+```sql
odbc(connection_settings, external_database, external_table)
```
@@ -26,15 +26,17 @@ Ensure that unixODBC and MySQL Connector are installed.
By default (if installed from packages), ClickHouse starts as user `clickhouse`. Thus you need to create and configure this user in the MySQL server.
+```bash
+$ sudo mysql
```
-sudo mysql
+```sql
mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'clickhouse' WITH GRANT OPTION;
```
Then configure the connection in `/etc/odbc.ini`.
-```
+```bash
$ cat /etc/odbc.ini
[mysqlconn]
DRIVER = /usr/local/lib/libmyodbc5w.so
@@ -47,8 +49,8 @@ PASSWORD = clickhouse
You can check the connection using the `isql` utility from the unixODBC installation.
-```
-isql -v mysqlconn
+```bash
+$ isql -v mysqlconn
+---------------------------------------+
| Connected! |
| |
@@ -57,7 +59,7 @@ isql -v mysqlconn
Table in MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/en/query_language/table_functions/remote.md b/docs/en/query_language/table_functions/remote.md
index 3b9263d959c..0543d17cade 100644
--- a/docs/en/query_language/table_functions/remote.md
+++ b/docs/en/query_language/table_functions/remote.md
@@ -5,7 +5,7 @@ Allows you to access remote servers without creating a `Distributed` table.
Signatures:
-``` sql
+```sql
remote('addresses_expr', db, table[, 'user'[, 'password']])
remote('addresses_expr', db.table[, 'user'[, 'password']])
```
@@ -17,7 +17,7 @@ remote('addresses_expr', db.table[, 'user'[, 'password']])
Examples:
-```
+```text
example01-01-1
example01-01-1:9000
localhost
@@ -30,19 +30,19 @@ Multiple addresses can be comma-separated. In this case, ClickHouse will use dis
Example:
-```
+```text
example01-01-1,example01-02-1
```
Part of the expression can be specified in curly brackets. The previous example can be written as follows:
-```
+```text
example01-0{1,2}-1
```
Curly brackets can contain a range of numbers separated by two dots (non-negative integers). In this case, the range is expanded to a set of values that generate shard addresses. If the first number starts with zero, the values are formed with the same zero alignment. The previous example can be written as follows:
-```
+```text
example01-{01..02}-1
```
@@ -52,7 +52,7 @@ Addresses and parts of addresses in curly brackets can be separated by the pipe
Example:
-```
+```text
example01-{01..02}-{1|2}
```
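
Combining these patterns, a sketch that addresses two shards with two replicas each (`db.table` is a placeholder):

```sql
SELECT count() FROM remote('example01-{01..02}-{1|2}', db.table)
```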
diff --git a/docs/en/query_language/table_functions/url.md b/docs/en/query_language/table_functions/url.md
index edeabdc6902..005a23b6bdc 100644
--- a/docs/en/query_language/table_functions/url.md
+++ b/docs/en/query_language/table_functions/url.md
@@ -12,7 +12,7 @@ structure - table structure in `'UserID UInt64, Name String'` format. Determines
**Example**
-``` sql
+```sql
-- getting the first 3 lines of a table that contains columns of String and UInt32 type from HTTP-server which answers in CSV format.
SELECT * FROM url('http://127.0.0.1:12345/', CSV, 'column1 String, column2 UInt32') LIMIT 3
```
diff --git a/docs/fa/interfaces/jdbc.md b/docs/fa/interfaces/jdbc.md
index 5cb82ff4274..7a498a32c63 100644
--- a/docs/fa/interfaces/jdbc.md
+++ b/docs/fa/interfaces/jdbc.md
@@ -2,7 +2,7 @@
# درایور JDBC
-درایور رسمی JDBC برای ClickHouse وجود دارد. برای اطلاعات بیشتر [اینجا](https://github.com/yandex/clickhouse-jdbc) را ببینید.
+درایور رسمی JDBC برای ClickHouse وجود دارد. برای اطلاعات بیشتر [اینجا](https://github.com/ClickHouse/clickhouse-jdbc) را ببینید.
درایور JDBC توسط سازمان های دیگر اجرا می شوند.
diff --git a/docs/fa/interfaces/odbc.md b/docs/fa/interfaces/odbc.md
index 692fc92651d..da96813ce8d 100644
--- a/docs/fa/interfaces/odbc.md
+++ b/docs/fa/interfaces/odbc.md
@@ -1,7 +1,7 @@
# ODBC درایور
-درایور رسمی ODBC برای ClickHouse وجود دارد. برای اطلاعات بیشتر [اینجا](https://github.com/yandex/clickhouse-odbc) را ببینید.
+درایور رسمی ODBC برای ClickHouse وجود دارد. برای اطلاعات بیشتر [اینجا](https://github.com/ClickHouse/clickhouse-odbc) را ببینید.
[مقاله اصلی](https://clickhouse.yandex/docs/fa/interfaces/odbc/)
diff --git a/docs/fa/interfaces/tcp.md b/docs/fa/interfaces/tcp.md
index bd902aedc58..d6fad9bd5c7 100644
--- a/docs/fa/interfaces/tcp.md
+++ b/docs/fa/interfaces/tcp.md
@@ -2,7 +2,7 @@
# رابط بومی (TCP)
-پروتکل بومی در [خط فرمان خط] (cli.md)، برای برقراری ارتباط بین سرور در طی پردازش پرس و جو توزیع شده، و همچنین در سایر برنامه های C ++ استفاده می شود. متاسفانه، پروتکل ClickHouse بومی هنوز مشخصات رسمی ندارد، اما می توان آن را از کد منبع ClickHouse (شروع [از اینجا](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) و / یا با رهگیری و تجزیه و تحلیل ترافیک TCP.
+پروتکل بومی در [خط فرمان خط] (cli.md)، برای برقراری ارتباط بین سرور در طی پردازش پرس و جو توزیع شده، و همچنین در سایر برنامه های C ++ استفاده می شود. متاسفانه، پروتکل ClickHouse بومی هنوز مشخصات رسمی ندارد، اما می توان آن را از کد منبع ClickHouse (شروع [از اینجا](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) و / یا با رهگیری و تجزیه و تحلیل ترافیک TCP.
[مقاله اصلی](https://clickhouse.yandex/docs/fa/interfaces/tcp/)
diff --git a/docs/ru/data_types/array.md b/docs/ru/data_types/array.md
index 890e314be99..c6cc6af4b13 100644
--- a/docs/ru/data_types/array.md
+++ b/docs/ru/data_types/array.md
@@ -8,41 +8,32 @@
Массив можно создать с помощью функции:
-```
+```sql
array(T)
```
Также можно использовать квадратные скобки
-```
+```sql
[]
```
Пример создания массива:
+```sql
+SELECT array(1, 2) AS x, toTypeName(x)
```
-:) SELECT array(1, 2) AS x, toTypeName(x)
-
-SELECT
- [1, 2] AS x,
- toTypeName(x)
-
+```text
┌─x─────┬─toTypeName(array(1, 2))─┐
│ [1,2] │ Array(UInt8) │
└───────┴─────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
-
-:) SELECT [1, 2] AS x, toTypeName(x)
-
-SELECT
- [1, 2] AS x,
- toTypeName(x)
-
+```
+```sql
+SELECT [1, 2] AS x, toTypeName(x)
+```
+```text
┌─x─────┬─toTypeName([1, 2])─┐
│ [1,2] │ Array(UInt8) │
└───────┴────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
## Особенности работы с типами данных
@@ -53,31 +44,23 @@ SELECT
Примеры автоматического определения типа данных:
+```sql
+SELECT array(1, 2, NULL) AS x, toTypeName(x)
```
-:) SELECT array(1, 2, NULL) AS x, toTypeName(x)
-
-SELECT
- [1, 2, NULL] AS x,
- toTypeName(x)
-
+```text
┌─x──────────┬─toTypeName(array(1, 2, NULL))─┐
│ [1,2,NULL] │ Array(Nullable(UInt8)) │
└────────────┴───────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
Если попытаться создать массив из несовместимых типов данных, то ClickHouse выбросит исключение:
+```sql
+SELECT array(1, 'a')
```
-:) SELECT array(1, 'a')
-
-SELECT [1, 'a']
-
+```text
Received exception from server (version 1.1.54388):
Code: 386. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: There is no supertype for types UInt8, String because some of them are String/FixedString and some of them are not.
-
-0 rows in set. Elapsed: 0.246 sec.
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/data_types/array/)
diff --git a/docs/ru/data_types/decimal.md b/docs/ru/data_types/decimal.md
index 1dc2679213c..110eb84f31f 100644
--- a/docs/ru/data_types/decimal.md
+++ b/docs/ru/data_types/decimal.md
@@ -50,35 +50,35 @@
## Проверка переполнений
При выполнении операций над типом Decimal могут происходить целочисленные переполнения. Лишняя дробная часть отбрасывается (не округляется). Лишняя целочисленная часть приводит к исключению.
-```
+```sql
SELECT toDecimal32(2, 4) AS x, x / 3
```
-```
+```text
┌──────x─┬─divide(toDecimal32(2, 4), 3)─┐
│ 2.0000 │ 0.6666 │
└────────┴──────────────────────────────┘
```
-```
+```sql
SELECT toDecimal32(4.2, 8) AS x, x * x
```
-```
+```text
DB::Exception: Scale is out of bounds.
```
-```
+```sql
SELECT toDecimal32(4.2, 8) AS x, 6 * x
```
-```
+```text
DB::Exception: Decimal math overflow.
```
Проверка переполнения приводит к замедлению операций. При уверенности, что типа результата хватит для его записи проверку переполнения можно отключить настройкой decimal_check_overflow. В этом случае при переполнении вернется неверное значение:
-```
+```sql
SET decimal_check_overflow = 0;
SELECT toDecimal32(4.2, 8) AS x, 6 * x
```
-```
+```text
┌──────────x─┬─multiply(6, toDecimal32(4.2, 8))─┐
│ 4.20000000 │ -17.74967296 │
└────────────┴──────────────────────────────────┘
@@ -86,10 +86,10 @@ SELECT toDecimal32(4.2, 8) AS x, 6 * x
Переполнения происходят не только на арифметических операциях, но и на операциях сравнения. Отключать проверку стоит только при полной уверенности в корректности результата:
-```
+```sql
SELECT toDecimal32(1, 8) < 100
```
-```
+```text
DB::Exception: Can't compare.
```
diff --git a/docs/ru/data_types/domains/ipv4.md b/docs/ru/data_types/domains/ipv4.md
index 03f8f5900aa..8d8b5bf9fc5 100644
--- a/docs/ru/data_types/domains/ipv4.md
+++ b/docs/ru/data_types/domains/ipv4.md
@@ -10,7 +10,7 @@ CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY url;
DESCRIBE TABLE hits;
```
-```
+```text
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐
│ url │ String │ │ │ │ │
│ from │ IPv4 │ │ │ │ │
@@ -31,7 +31,7 @@ INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '116.253.40.133')(
SELECT * FROM hits;
```
-```
+```text
┌─url────────────────────────────────┬───────────from─┐
│ https://clickhouse.yandex/docs/en/ │ 116.106.34.242 │
│ https://wikipedia.org │ 116.253.40.133 │
@@ -45,7 +45,7 @@ SELECT * FROM hits;
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(from)─┬─hex(from)─┐
│ IPv4 │ B7F7E83A │
└──────────────────┴───────────┘
@@ -58,7 +58,7 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
SELECT toTypeName(s), IPv4NumToString(from) AS s FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(IPv4NumToString(from))─┬─s──────────────┐
│ String │ 183.247.232.58 │
└───────────────────────────────────┴────────────────┘
@@ -70,7 +70,7 @@ SELECT toTypeName(s), IPv4NumToString(from) AS s FROM hits LIMIT 1;
SELECT toTypeName(i), CAST(from AS UInt32) AS i FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(CAST(from, 'UInt32'))─┬──────────i─┐
│ UInt32 │ 3086477370 │
└──────────────────────────────────┴────────────┘
diff --git a/docs/ru/data_types/domains/ipv6.md b/docs/ru/data_types/domains/ipv6.md
index e3f1c190060..796f8ef4040 100644
--- a/docs/ru/data_types/domains/ipv6.md
+++ b/docs/ru/data_types/domains/ipv6.md
@@ -10,7 +10,7 @@ CREATE TABLE hits (url String, from IPv6) ENGINE = MergeTree() ORDER BY url;
DESCRIBE TABLE hits;
```
-```
+```text
┌─name─┬─type───┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┐
│ url │ String │ │ │ │ │
│ from │ IPv6 │ │ │ │ │
@@ -31,7 +31,7 @@ INSERT INTO hits (url, from) VALUES ('https://wikipedia.org', '2a02:aa08:e000:31
SELECT * FROM hits;
```
-```
+```text
┌─url────────────────────────────────┬─from──────────────────────────┐
│ https://clickhouse.yandex │ 2001:44c8:129:2632:33:0:252:2 │
│ https://clickhouse.yandex/docs/en/ │ 2a02:e980:1e::1 │
@@ -45,7 +45,7 @@ SELECT * FROM hits;
SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(from)─┬─hex(from)────────────────────────┐
│ IPv6 │ 200144C8012926320033000002520002 │
└──────────────────┴──────────────────────────────────┘
@@ -58,7 +58,7 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1;
SELECT toTypeName(s), IPv6NumToString(from) AS s FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(IPv6NumToString(from))─┬─s─────────────────────────────┐
│ String │ 2001:44c8:129:2632:33:0:252:2 │
└───────────────────────────────────┴───────────────────────────────┘
@@ -70,7 +70,7 @@ SELECT toTypeName(s), IPv6NumToString(from) AS s FROM hits LIMIT 1;
SELECT toTypeName(i), CAST(from AS FixedString(16)) AS i FROM hits LIMIT 1;
```
-```
+```text
┌─toTypeName(CAST(from, 'FixedString(16)'))─┬─i───────┐
│ FixedString(16) │ ��� │
└───────────────────────────────────────────┴─────────┘
diff --git a/docs/ru/data_types/enum.md b/docs/ru/data_types/enum.md
index 8d32cce1648..764df79eb74 100644
--- a/docs/ru/data_types/enum.md
+++ b/docs/ru/data_types/enum.md
@@ -26,19 +26,15 @@ ENGINE = TinyLog
В столбец `x` можно сохранять только значения, перечисленные при определении типа, т.е. `'hello'` или `'world'`. Если вы попытаетесь сохранить любое другое значение, ClickHouse сгенерирует исключение. ClickHouse автоматически выберет размерность 8-bit для этого `Enum`.
```sql
-:) INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello')
-
-INSERT INTO t_enum VALUES
-
+INSERT INTO t_enum VALUES ('hello'), ('world'), ('hello')
+```
+```text
Ok.
-
-3 rows in set. Elapsed: 0.002 sec.
-
-:) insert into t_enum values('a')
-
-INSERT INTO t_enum VALUES
-
-
+```
+```sql
+insert into t_enum values('a')
+```
+```text
Exception on client:
Code: 49. DB::Exception: Unknown element 'a' for type Enum('hello' = 1, 'world' = 2)
```
@@ -47,7 +43,8 @@ Code: 49. DB::Exception: Unknown element 'a' for type Enum('hello' = 1, 'world'
```sql
SELECT * FROM t_enum
-
+```
+```text
┌─x─────┐
│ hello │
│ world │
@@ -59,7 +56,8 @@ SELECT * FROM t_enum
```sql
SELECT CAST(x AS Int8) FROM t_enum
-
+```
+```text
┌─CAST(x, 'Int8')─┐
│ 1 │
│ 2 │
@@ -71,7 +69,8 @@ SELECT CAST(x AS Int8) FROM t_enum
```sql
SELECT toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))
-
+```
+```text
┌─toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))─┐
│ Enum8('a' = 1, 'b' = 2) │
└─────────────────────────────────────────────────────┘
@@ -85,7 +84,7 @@ SELECT toTypeName(CAST('a', 'Enum(\'a\' = 1, \'b\' = 2)'))
`Enum` может быть передан в тип [Nullable](nullable.md). Таким образом, если создать таблицу запросом
-```
+```sql
CREATE TABLE t_enum_nullable
(
x Nullable( Enum8('hello' = 1, 'world' = 2) )
@@ -95,7 +94,7 @@ ENGINE = TinyLog
, то в ней можно будет хранить не только `'hello'` и `'world'`, но и `NULL`.
-```
+```sql
INSERT INTO t_enum_nullable Values('hello'),('world'),(NULL)
```
diff --git a/docs/ru/data_types/fixedstring.md b/docs/ru/data_types/fixedstring.md
index 1634a95bc67..64b8dbf6409 100644
--- a/docs/ru/data_types/fixedstring.md
+++ b/docs/ru/data_types/fixedstring.md
@@ -4,7 +4,7 @@
Чтобы объявить столбец типа `FixedString`, используйте следующий синтаксис:
-```
+```sql
FixedString(N)
```
@@ -30,7 +30,7 @@
Рассмотрим следующую таблицу с единственным столбцом типа `FixedString(2)`:
-```
+```text
┌─name──┐
│ b │
└───────┘
@@ -38,15 +38,15 @@
Запрос `SELECT * FROM FixedStringTable WHERE a = 'b'` не возвращает необходимых данных. Необходимо дополнить шаблон фильтра нулевыми байтами.
-```
+```sql
SELECT * FROM FixedStringTable
WHERE a = 'b\0'
-
+```
+```text
┌─a─┐
│ b │
└───┘
-1 rows in set. Elapsed: 0.002 sec.
```
Это поведение отличается от поведения MySQL для типа `CHAR`, где строки дополняются пробелами, а пробелы перед выводом вырезаются.
diff --git a/docs/ru/data_types/float.md b/docs/ru/data_types/float.md
index 395d53cda7d..3eb9f4b8078 100644
--- a/docs/ru/data_types/float.md
+++ b/docs/ru/data_types/float.md
@@ -13,10 +13,10 @@
- При вычислениях с числами с плавающей запятой возможна ошибка округления.
-``` sql
+```sql
SELECT 1 - 0.9
```
-```
+```text
┌───────minus(1, 0.9)─┐
│ 0.09999999999999998 │
└─────────────────────┘
@@ -26,17 +26,17 @@ SELECT 1 - 0.9
- При вычислениях с плавающей запятой возможно появление таких категорий числа как бесконечность (`Inf`) и "не число" (`NaN`). Это необходимо учитывать при обработке результатов вычислений.
- При чтении чисел с плавающей запятой из строк, в качестве результата может быть получено не обязательно ближайшее машинно-представимое число.
-## NaN и Inf
+## NaN и Inf {#data_type-float-nan-inf}
В отличие от стандартного SQL, ClickHouse поддерживает следующие категории чисел с плавающей запятой:
- `Inf` - бесконечность.
-``` sql
+```sql
SELECT 0.5 / 0
```
-```
+```text
┌─divide(0.5, 0)─┐
│ inf │
└────────────────┘
@@ -44,11 +44,11 @@ SELECT 0.5 / 0
- `-Inf` - отрицательная бесконечность;
-``` sql
+```sql
SELECT -0.5 / 0
```
-```
+```text
┌─divide(-0.5, 0)─┐
│ -inf │
└─────────────────┘
@@ -56,10 +56,10 @@ SELECT -0.5 / 0
- `NaN` - не число.
-```
+```sql
SELECT 0 / 0
```
-```
+```text
┌─divide(0, 0)─┐
│ nan │
└──────────────┘
diff --git a/docs/ru/data_types/nested_data_structures/aggregatefunction.md b/docs/ru/data_types/nested_data_structures/aggregatefunction.md
index 6e4cabc098a..b772aaf935b 100644
--- a/docs/ru/data_types/nested_data_structures/aggregatefunction.md
+++ b/docs/ru/data_types/nested_data_structures/aggregatefunction.md
@@ -33,7 +33,7 @@ CREATE TABLE t
**Примеры функций**
-```
+```sql
uniqState(UserID)
quantilesState(0.5, 0.9)(SendTiming)
```
diff --git a/docs/ru/data_types/nested_data_structures/nested.md b/docs/ru/data_types/nested_data_structures/nested.md
index 06a62801e6e..58c7c48da3e 100644
--- a/docs/ru/data_types/nested_data_structures/nested.md
+++ b/docs/ru/data_types/nested_data_structures/nested.md
@@ -4,7 +4,7 @@
Пример:
-``` sql
+```sql
CREATE TABLE test.visits
(
CounterID UInt32,
@@ -35,7 +35,7 @@ CREATE TABLE test.visits
Пример:
-``` sql
+```sql
SELECT
Goals.ID,
Goals.EventTime
@@ -44,7 +44,7 @@ WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
```
-```
+```text
┌─Goals.ID───────────────────────┬─Goals.EventTime───────────────────────────────────────────────────────────────────────────┐
│ [1073752,591325,591325] │ ['2014-03-17 16:38:10','2014-03-17 16:38:48','2014-03-17 16:42:27'] │
│ [1073752] │ ['2014-03-17 00:28:25'] │
@@ -63,7 +63,7 @@ LIMIT 10
Единственное место, где в запросе SELECT можно указать имя целой вложенной структуры данных, а не отдельных столбцов - секция ARRAY JOIN. Подробнее см. раздел "Секция ARRAY JOIN". Пример:
-``` sql
+```sql
SELECT
Goal.ID,
Goal.EventTime
@@ -73,7 +73,7 @@ WHERE CounterID = 101500 AND length(Goals.ID) < 5
LIMIT 10
```
-```
+```text
┌─Goal.ID─┬──────Goal.EventTime─┐
│ 1073752 │ 2014-03-17 16:38:10 │
│ 591325 │ 2014-03-17 16:38:48 │
diff --git a/docs/ru/data_types/nullable.md b/docs/ru/data_types/nullable.md
index f9c3715519b..a953f6bb2b7 100644
--- a/docs/ru/data_types/nullable.md
+++ b/docs/ru/data_types/nullable.md
@@ -24,40 +24,21 @@
## Usage example
+```sql
+CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog
```
-:) CREATE TABLE t_null(x Int8, y Nullable(Int8)) ENGINE TinyLog
-
-CREATE TABLE t_null
-(
- x Int8,
- y Nullable(Int8)
-)
-ENGINE = TinyLog
-
-Ok.
-
-0 rows in set. Elapsed: 0.012 sec.
-
-:) INSERT INTO t_null VALUES (1, NULL), (2, 3)
-
-INSERT INTO t_null VALUES
-
-Ok.
-
-1 rows in set. Elapsed: 0.007 sec.
-
-:) SELECT x + y from t_null
-
-SELECT x + y
-FROM t_null
-
+```sql
+INSERT INTO t_null VALUES (1, NULL), (2, 3)
+```
+```sql
+SELECT x + y from t_null
+```
+```text
┌─plus(x, y)─┐
│ ᴺᵁᴸᴸ │
│ 5 │
└────────────┘
-2 rows in set. Elapsed: 0.144 sec.
-
```
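As the first row above shows, functions over `Nullable` arguments propagate `NULL`: `x + y` is `NULL` whenever `y` is. Rows with `NULL` can be filtered out with `isNotNull`, for example (an added sketch):

```sql
SELECT x + y FROM t_null WHERE isNotNull(y)
```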
[Original article](https://clickhouse.yandex/docs/ru/data_types/nullable/)
diff --git a/docs/ru/data_types/special_data_types/nothing.md b/docs/ru/data_types/special_data_types/nothing.md
index 19f879e2242..65b90ca8c89 100644
--- a/docs/ru/data_types/special_data_types/nothing.md
+++ b/docs/ru/data_types/special_data_types/nothing.md
@@ -5,16 +5,13 @@
It is impossible to create a value of the `Nothing` type, so it is used where a value is not expected. For example, `NULL` is written as `Nullable(Nothing)` ([Nullable](../../data_types/nullable.md) is a data type that allows storing `NULL` in tables). The `Nothing` type is also used to denote empty arrays:
-```bash
-:) SELECT toTypeName(Array())
-
-SELECT toTypeName([])
-
+```sql
+SELECT toTypeName(Array())
+```
+```text
┌─toTypeName(array())─┐
│ Array(Nothing) │
└─────────────────────┘
-
-1 rows in set. Elapsed: 0.062 sec.
```
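The `Nullable(Nothing)` representation of `NULL` mentioned above can be verified the same way (an added illustrative query):

```sql
-- NULL itself is typed as Nullable(Nothing).
SELECT toTypeName(NULL)
```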
[Original article](https://clickhouse.yandex/docs/ru/data_types/special_data_types/nothing/)
diff --git a/docs/ru/data_types/tuple.md b/docs/ru/data_types/tuple.md
index baf06908087..7b6d11ca168 100644
--- a/docs/ru/data_types/tuple.md
+++ b/docs/ru/data_types/tuple.md
@@ -11,24 +11,19 @@
A tuple can be created using the function
-```
+```sql
tuple(T1, T2, ...)
```
Example of creating a tuple:
+```sql
+SELECT tuple(1,'a') AS x, toTypeName(x)
```
-:) SELECT tuple(1,'a') AS x, toTypeName(x)
-
-SELECT
- (1, 'a') AS x,
- toTypeName(x)
-
+```text
┌─x───────┬─toTypeName(tuple(1, 'a'))─┐
│ (1,'a') │ Tuple(UInt8, String) │
└─────────┴───────────────────────────┘
-
-1 rows in set. Elapsed: 0.021 sec.
```
## Specifics of working with data types
@@ -37,18 +32,13 @@ SELECT
Example of automatic data type detection:
-```
+```sql
SELECT tuple(1,NULL) AS x, toTypeName(x)
-
-SELECT
- (1, NULL) AS x,
- toTypeName(x)
-
+```
+```text
┌─x────────┬─toTypeName(tuple(1, NULL))──────┐
│ (1,NULL) │ Tuple(UInt8, Nullable(Nothing)) │
└──────────┴─────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
[Original article](https://clickhouse.yandex/docs/ru/data_types/tuple/)
diff --git a/docs/ru/data_types/uuid.md b/docs/ru/data_types/uuid.md
index 19696f51f65..8c583bf72bf 100644
--- a/docs/ru/data_types/uuid.md
+++ b/docs/ru/data_types/uuid.md
@@ -4,13 +4,13 @@
An example of a UUID value is shown below:
-```
+```text
61f0c404-5cb3-11e7-907b-a6006ad3dba0
```
If no value is specified for the UUID column when inserting a new record, the UUID is filled with zeros:
-```
+```text
00000000-0000-0000-0000-000000000000
```
@@ -26,15 +26,16 @@
This example demonstrates how to create a table with a UUID column and insert a generated UUID into it.
-``` sql
-:) CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
-
-:) INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1'
-
-:) SELECT * FROM t_uuid
+```sql
+CREATE TABLE t_uuid (x UUID, y String) ENGINE=TinyLog
```
-
+```sql
+INSERT INTO t_uuid SELECT generateUUIDv4(), 'Example 1'
```
+```sql
+SELECT * FROM t_uuid
+```
+```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
└──────────────────────────────────────┴───────────┘
@@ -44,13 +45,13 @@
In this example, no value is specified for the UUID column when the record is added, so the UUID is filled with zeros.
-``` sql
-:) INSERT INTO t_uuid (y) VALUES ('Example 2')
-
-:) SELECT * FROM t_uuid
+```sql
+INSERT INTO t_uuid (y) VALUES ('Example 2')
```
-
+```sql
+SELECT * FROM t_uuid
```
+```text
┌────────────────────────────────────x─┬─y─────────┐
│ 417ddc5d-e556-4d27-95dd-a34d84e46a50 │ Example 1 │
│ 00000000-0000-0000-0000-000000000000 │ Example 2 │
diff --git a/docs/ru/database_engines/mysql.md b/docs/ru/database_engines/mysql.md
index acfb71d839c..28922638744 100644
--- a/docs/ru/database_engines/mysql.md
+++ b/docs/ru/database_engines/mysql.md
@@ -51,8 +51,7 @@ ENGINE = MySQL('host:port', 'database', 'user', 'password')
## Usage examples
Table in MySQL:
-
-```
+```text
mysql> USE test;
Database changed
@@ -73,7 +72,6 @@ mysql> select * from mysql_table;
+--------+-------+
1 row in set (0,00 sec)
```
-
A database in ClickHouse that exchanges data with the MySQL server:
```sql
diff --git a/docs/ru/development/style.md b/docs/ru/development/style.md
index 0920f35e817..d72a11d1d49 100644
--- a/docs/ru/development/style.md
+++ b/docs/ru/development/style.md
@@ -401,13 +401,13 @@ enum class CompressionMethod
**15.** All names must be in English. Transliteration from Russian is not allowed.
-```
+```text
not Stroka
```
**16.** Abbreviations (made from letters of different words) may be used in names only if they are generally accepted (i.e., the expansion of the abbreviation can be found in the English Wikipedia or via a search query).
-```
+```text
`AST`, `SQL`.
Not `NVDH` (something obscure)
diff --git a/docs/ru/faq/general.md b/docs/ru/faq/general.md
index cc388b4ef5d..010926d2cf9 100644
--- a/docs/ru/faq/general.md
+++ b/docs/ru/faq/general.md
@@ -17,7 +17,7 @@
**Example**
-```
+```text
NLS_LANG=RUSSIAN_RUSSIA.UTF8
```
diff --git a/docs/ru/getting_started/example_datasets/amplab_benchmark.md b/docs/ru/getting_started/example_datasets/amplab_benchmark.md
index 87b8de2be43..f1ef7230ed4 100644
--- a/docs/ru/getting_started/example_datasets/amplab_benchmark.md
+++ b/docs/ru/getting_started/example_datasets/amplab_benchmark.md
@@ -8,16 +8,16 @@
Run the following in the console:
```bash
-sudo apt-get install s3cmd
-mkdir tiny; cd tiny;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ .
-cd ..
-mkdir 1node; cd 1node;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ .
-cd ..
-mkdir 5nodes; cd 5nodes;
-s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ .
-cd ..
+$ sudo apt-get install s3cmd
+$ mkdir tiny; cd tiny;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/tiny/ .
+$ cd ..
+$ mkdir 1node; cd 1node;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/1node/ .
+$ cd ..
+$ mkdir 5nodes; cd 5nodes;
+$ s3cmd sync s3://big-data-benchmark/pavlo/text-deflate/5nodes/ .
+$ cd ..
```
Run the following queries in ClickHouse:
@@ -87,12 +87,12 @@ CREATE TABLE uservisits_5nodes_on_single
Going back to the console:
```bash
-for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done
-for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done
-for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done
-for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done
-for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done
-for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done
+$ for i in tiny/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_tiny FORMAT CSV"; done
+$ for i in tiny/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_tiny FORMAT CSV"; done
+$ for i in 1node/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_1node FORMAT CSV"; done
+$ for i in 1node/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_1node FORMAT CSV"; done
+$ for i in 5nodes/rankings/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO rankings_5nodes_on_single FORMAT CSV"; done
+$ for i in 5nodes/uservisits/*.deflate; do echo $i; zlib-flate -uncompress < $i | clickhouse-client --host=example-perftest01j --query="INSERT INTO uservisits_5nodes_on_single FORMAT CSV"; done
```
Queries for obtaining data samples:
diff --git a/docs/ru/getting_started/example_datasets/criteo.md b/docs/ru/getting_started/example_datasets/criteo.md
index 32fc8e234dc..5afb8046a57 100644
--- a/docs/ru/getting_started/example_datasets/criteo.md
+++ b/docs/ru/getting_started/example_datasets/criteo.md
@@ -4,19 +4,19 @@
Create a table to import the log into:
-``` sql
+```sql
CREATE TABLE criteo_log (date Date, clicked UInt8, int1 Int32, int2 Int32, int3 Int32, int4 Int32, int5 Int32, int6 Int32, int7 Int32, int8 Int32, int9 Int32, int10 Int32, int11 Int32, int12 Int32, int13 Int32, cat1 String, cat2 String, cat3 String, cat4 String, cat5 String, cat6 String, cat7 String, cat8 String, cat9 String, cat10 String, cat11 String, cat12 String, cat13 String, cat14 String, cat15 String, cat16 String, cat17 String, cat18 String, cat19 String, cat20 String, cat21 String, cat22 String, cat23 String, cat24 String, cat25 String, cat26 String) ENGINE = Log
```
Load the data:
```bash
-for i in {00..23}; do echo $i; zcat datasets/criteo/day_${i#0}.gz | sed -r 's/^/2000-01-'${i/00/24}'\t/' | clickhouse-client --host=example-perftest01j --query="INSERT INTO criteo_log FORMAT TabSeparated"; done
+$ for i in {00..23}; do echo $i; zcat datasets/criteo/day_${i#0}.gz | sed -r 's/^/2000-01-'${i/00/24}'\t/' | clickhouse-client --host=example-perftest01j --query="INSERT INTO criteo_log FORMAT TabSeparated"; done
```
Create a table for the converted data:
-``` sql
+```sql
CREATE TABLE criteo
(
date Date,
@@ -65,7 +65,7 @@ CREATE TABLE criteo
Transform the data from the raw log and put it into the second table:
-``` sql
+```sql
INSERT INTO criteo SELECT date, clicked, int1, int2, int3, int4, int5, int6, int7, int8, int9, int10, int11, int12, int13, reinterpretAsUInt32(unhex(cat1)) AS icat1, reinterpretAsUInt32(unhex(cat2)) AS icat2, reinterpretAsUInt32(unhex(cat3)) AS icat3, reinterpretAsUInt32(unhex(cat4)) AS icat4, reinterpretAsUInt32(unhex(cat5)) AS icat5, reinterpretAsUInt32(unhex(cat6)) AS icat6, reinterpretAsUInt32(unhex(cat7)) AS icat7, reinterpretAsUInt32(unhex(cat8)) AS icat8, reinterpretAsUInt32(unhex(cat9)) AS icat9, reinterpretAsUInt32(unhex(cat10)) AS icat10, reinterpretAsUInt32(unhex(cat11)) AS icat11, reinterpretAsUInt32(unhex(cat12)) AS icat12, reinterpretAsUInt32(unhex(cat13)) AS icat13, reinterpretAsUInt32(unhex(cat14)) AS icat14, reinterpretAsUInt32(unhex(cat15)) AS icat15, reinterpretAsUInt32(unhex(cat16)) AS icat16, reinterpretAsUInt32(unhex(cat17)) AS icat17, reinterpretAsUInt32(unhex(cat18)) AS icat18, reinterpretAsUInt32(unhex(cat19)) AS icat19, reinterpretAsUInt32(unhex(cat20)) AS icat20, reinterpretAsUInt32(unhex(cat21)) AS icat21, reinterpretAsUInt32(unhex(cat22)) AS icat22, reinterpretAsUInt32(unhex(cat23)) AS icat23, reinterpretAsUInt32(unhex(cat24)) AS icat24, reinterpretAsUInt32(unhex(cat25)) AS icat25, reinterpretAsUInt32(unhex(cat26)) AS icat26 FROM criteo_log;
DROP TABLE criteo_log;
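-- Added sanity check (not part of the original instructions):
-- verify that the conversion produced rows in the new table.
SELECT count() FROM criteo;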
diff --git a/docs/ru/getting_started/example_datasets/metrica.md b/docs/ru/getting_started/example_datasets/metrica.md
index 38c346d2b74..096c48e9d30 100644
--- a/docs/ru/getting_started/example_datasets/metrica.md
+++ b/docs/ru/getting_started/example_datasets/metrica.md
@@ -4,48 +4,48 @@
## Obtaining tables from prepared partitions
**Downloading and importing the hits partitions:**
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
-tar xvf hits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
-# make sure the files have the correct access permissions
-sudo service clickhouse-server restart
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/hits/partitions/hits_v1.tar
+$ tar xvf hits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
+$ # make sure the files have the correct access permissions
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```
**Downloading and importing the visits partitions:**
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
-tar xvf visits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
-# make sure the files have the correct access permissions
-sudo service clickhouse-server restart
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/visits/partitions/visits_v1.tar
+$ tar xvf visits_v1.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
+$ # make sure the files have the correct access permissions
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Obtaining tables from compressed tsv files
**Downloading and importing hits from a compressed tsv file**
```bash
-curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
-# now create the table
-clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
-clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
-# import the data
-cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000
-# optionally, you can optimize the table
-clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL"
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
+$ curl https://clickhouse-datasets.s3.yandex.net/hits/tsv/hits_v1.tsv.xz | unxz --threads=`nproc` > hits_v1.tsv
+$ # now create the table
+$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
+$ clickhouse-client --query "CREATE TABLE datasets.hits_v1 ( WatchID UInt64, JavaEnable UInt8, Title String, GoodEvent Int16, EventTime DateTime, EventDate Date, CounterID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RegionID UInt32, UserID UInt64, CounterClass Int8, OS UInt8, UserAgent UInt8, URL String, Referer String, URLDomain String, RefererDomain String, Refresh UInt8, IsRobot UInt8, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), ResolutionWidth UInt16, ResolutionHeight UInt16, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, FlashMinor2 String, NetMajor UInt8, NetMinor UInt8, UserAgentMajor UInt16, UserAgentMinor FixedString(2), CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, MobilePhone UInt8, MobilePhoneModel String, Params String, IPNetworkID UInt32, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, IsArtifical UInt8, WindowClientWidth UInt16, WindowClientHeight UInt16, ClientTimeZone Int16, ClientEventTime DateTime, SilverlightVersion1 UInt8, SilverlightVersion2 UInt8, SilverlightVersion3 UInt32, SilverlightVersion4 UInt16, PageCharset String, CodeVersion UInt32, IsLink UInt8, IsDownload UInt8, IsNotBounce UInt8, FUniqID UInt64, HID UInt32, IsOldCounter UInt8, IsEvent UInt8, IsParameter UInt8, DontCountHits UInt8, WithHash UInt8, HitColor FixedString(1), UTCEventTime DateTime, Age UInt8, Sex UInt8, Income UInt8, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), RemoteIP UInt32, RemoteIP6 FixedString(16), WindowName Int32, OpenerName Int32, HistoryLength Int16, BrowserLanguage FixedString(2), BrowserCountry FixedString(2), SocialNetwork String, SocialAction String, HTTPError UInt16, SendTiming Int32, DNSTiming Int32, ConnectTiming Int32, ResponseStartTiming Int32, ResponseEndTiming Int32, FetchTiming Int32, RedirectTiming Int32, DOMInteractiveTiming Int32, DOMContentLoadedTiming Int32, DOMCompleteTiming Int32, LoadEventStartTiming Int32, LoadEventEndTiming Int32, NSToDOMContentLoadedTiming Int32, FirstPaintTiming Int32, RedirectCount Int8, SocialSourceNetworkID UInt8, SocialSourcePage String, ParamPrice Int64, ParamOrderID String, ParamCurrency FixedString(3), ParamCurrencyID UInt16, GoalsReached Array(UInt32), OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, RefererHash UInt64, URLHash UInt64, CLID UInt32, YCLID UInt64, ShareService String, ShareURL String, ShareTitle String, ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), IslandID FixedString(16), RequestNum UInt32, RequestTry UInt8) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192"
+$ # import the data
+$ cat hits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.hits_v1 FORMAT TSV" --max_insert_block_size=100000
+$ # optionally, you can optimize the table
+$ clickhouse-client --query "OPTIMIZE TABLE datasets.hits_v1 FINAL"
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.hits_v1"
```
**Downloading and importing visits from a compressed tsv file**
```bash
-curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
-# now create the table
-clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
-clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign)"
-# import the data
-cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000
-# optionally, you can optimize the table
-clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL"
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
+$ curl https://clickhouse-datasets.s3.yandex.net/visits/tsv/visits_v1.tsv.xz | unxz --threads=`nproc` > visits_v1.tsv
+$ # now create the table
+$ clickhouse-client --query "CREATE DATABASE IF NOT EXISTS datasets"
+$ clickhouse-client --query "CREATE TABLE datasets.visits_v1 ( CounterID UInt32, StartDate Date, Sign Int8, IsNew UInt8, VisitID UInt64, UserID UInt64, StartTime DateTime, Duration UInt32, UTCStartTime DateTime, PageViews Int32, Hits Int32, IsBounce UInt8, Referer String, StartURL String, RefererDomain String, StartURLDomain String, EndURL String, LinkURL String, IsDownload UInt8, TraficSourceID Int8, SearchEngineID UInt16, SearchPhrase String, AdvEngineID UInt8, PlaceID Int32, RefererCategories Array(UInt16), URLCategories Array(UInt16), URLRegions Array(UInt32), RefererRegions Array(UInt32), IsYandex UInt8, GoalReachesDepth Int32, GoalReachesURL Int32, GoalReachesAny Int32, SocialSourceNetworkID UInt8, SocialSourcePage String, MobilePhoneModel String, ClientEventTime DateTime, RegionID UInt32, ClientIP UInt32, ClientIP6 FixedString(16), RemoteIP UInt32, RemoteIP6 FixedString(16), IPNetworkID UInt32, SilverlightVersion3 UInt32, CodeVersion UInt32, ResolutionWidth UInt16, ResolutionHeight UInt16, UserAgentMajor UInt16, UserAgentMinor UInt16, WindowClientWidth UInt16, WindowClientHeight UInt16, SilverlightVersion2 UInt8, SilverlightVersion4 UInt16, FlashVersion3 UInt16, FlashVersion4 UInt16, ClientTimeZone Int16, OS UInt8, UserAgent UInt8, ResolutionDepth UInt8, FlashMajor UInt8, FlashMinor UInt8, NetMajor UInt8, NetMinor UInt8, MobilePhone UInt8, SilverlightVersion1 UInt8, Age UInt8, Sex UInt8, Income UInt8, JavaEnable UInt8, CookieEnable UInt8, JavascriptEnable UInt8, IsMobile UInt8, BrowserLanguage UInt16, BrowserCountry UInt16, Interests UInt16, Robotness UInt8, GeneralInterests Array(UInt16), Params Array(String), Goals Nested(ID UInt32, Serial UInt32, EventTime DateTime, Price Int64, OrderID String, CurrencyID UInt32), WatchIDs Array(UInt64), ParamSumPrice Int64, ParamCurrency FixedString(3), ParamCurrencyID UInt16, ClickLogID UInt64, ClickEventID Int32, ClickGoodEvent Int32, ClickEventTime DateTime, ClickPriorityID Int32, ClickPhraseID Int32, ClickPageID Int32, ClickPlaceID Int32, ClickTypeID Int32, ClickResourceID Int32, ClickCost UInt32, ClickClientIP UInt32, ClickDomainID UInt32, ClickURL String, ClickAttempt UInt8, ClickOrderID UInt32, ClickBannerID UInt32, ClickMarketCategoryID UInt32, ClickMarketPP UInt32, ClickMarketCategoryName String, ClickMarketPPName String, ClickAWAPSCampaignName String, ClickPageName String, ClickTargetType UInt16, ClickTargetPhraseID UInt64, ClickContextType UInt8, ClickSelectType Int8, ClickOptions String, ClickGroupBannerID Int32, OpenstatServiceName String, OpenstatCampaignID String, OpenstatAdID String, OpenstatSourceID String, UTMSource String, UTMMedium String, UTMCampaign String, UTMContent String, UTMTerm String, FromTag String, HasGCLID UInt8, FirstVisit DateTime, PredLastVisit Date, LastVisit Date, TotalVisits UInt32, TraficSource Nested(ID Int8, SearchEngineID UInt16, AdvEngineID UInt8, PlaceID UInt16, SocialSourceNetworkID UInt8, Domain String, SearchPhrase String, SocialSourcePage String), Attendance FixedString(16), CLID UInt32, YCLID UInt64, NormalizedRefererHash UInt64, SearchPhraseHash UInt64, RefererDomainHash UInt64, NormalizedStartURLHash UInt64, StartURLDomainHash UInt64, NormalizedEndURLHash UInt64, TopLevelDomain UInt64, URLScheme UInt64, OpenstatServiceNameHash UInt64, OpenstatCampaignIDHash UInt64, OpenstatAdIDHash UInt64, OpenstatSourceIDHash UInt64, UTMSourceHash UInt64, UTMMediumHash UInt64, UTMCampaignHash UInt64, UTMContentHash UInt64, UTMTermHash UInt64, FromHash UInt64, WebVisorEnabled UInt8, WebVisorActivity UInt32, 
ParsedParams Nested(Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), Market Nested(Type UInt8, GoalID UInt32, OrderID String, OrderPrice Int64, PP UInt32, DirectPlaceID UInt32, DirectOrderID UInt32, DirectBannerID UInt32, GoodID String, GoodName String, GoodQuantity Int32, GoodPrice Int64), IslandID FixedString(16)) ENGINE = CollapsingMergeTree(StartDate, intHash32(UserID), (CounterID, StartDate, intHash32(UserID), VisitID), 8192, Sign)"
+$ # import the data
+$ cat visits_v1.tsv | clickhouse-client --query "INSERT INTO datasets.visits_v1 FORMAT TSV" --max_insert_block_size=100000
+$ # optionally, you can optimize the table
+$ clickhouse-client --query "OPTIMIZE TABLE datasets.visits_v1 FINAL"
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.visits_v1"
```
## Queries
-Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among the [stateful tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/yandex/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
+Examples of queries to these tables (they are named `test.hits` and `test.visits`) can be found among the [stateful tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/queries/1_stateful) and in some [performance tests](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/tests/performance/test_hits) of ClickHouse.
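For instance, a query in the spirit of those tests can be run against the tables imported above (an added sketch):

```sql
-- Count the hits that came from an advertising engine.
SELECT count() FROM datasets.hits_v1 WHERE AdvEngineID != 0
```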
diff --git a/docs/ru/getting_started/example_datasets/nyc_taxi.md b/docs/ru/getting_started/example_datasets/nyc_taxi.md
index 0f1bf99c5cb..1b23636c5c8 100644
--- a/docs/ru/getting_started/example_datasets/nyc_taxi.md
+++ b/docs/ru/getting_started/example_datasets/nyc_taxi.md
@@ -29,9 +29,9 @@ mv data/yellow_tripdata_2010-03.csv_ data/yellow_tripdata_2010-03.csv
You can check the number of loaded rows as follows:
-```
-time psql nyc-taxi-data -c "SELECT count(*) FROM trips;"
-## count
+```bash
+$ time psql nyc-taxi-data -c "SELECT count(*) FROM trips;"
+## Count
1298979494
(1 row)
@@ -44,7 +44,7 @@ real 7m9.164s
Exporting data from PostgreSQL:
-``` sql
+```sql
COPY
(
SELECT trips.id,
@@ -119,7 +119,7 @@ COPY
Creating a temporary table in ClickHouse:
-``` sql
+```sql
CREATE TABLE trips
(
trip_id UInt32,
@@ -178,8 +178,8 @@ dropoff_puma Nullable(String)
It is needed to convert the fields to more appropriate data types and, where possible, to get rid of NULLs.
-```
-time clickhouse-client --query="INSERT INTO trips FORMAT TabSeparated" < trips.tsv
+```bash
+$ time clickhouse-client --query="INSERT INTO trips FORMAT TabSeparated" < trips.tsv
real 75m56.214s
```
@@ -196,7 +196,7 @@ real 75m56.214s
Let's create and populate the final table:
-```
+```sql
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
@@ -263,13 +263,10 @@ FROM trips
The table uses 126 GB of disk space.
+```sql
+SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mergetree' AND active
```
-:) SELECT formatReadableSize(sum(bytes)) FROM system.parts WHERE table = 'trips_mergetree' AND active
-
-SELECT formatReadableSize(sum(bytes))
-FROM system.parts
-WHERE (table = 'trips_mergetree') AND active
-
+```text
┌─formatReadableSize(sum(bytes))─┐
│ 126.18 GiB │
└────────────────────────────────┘
@@ -280,11 +277,11 @@ WHERE (table = 'trips_mergetree') AND active
## Downloading prepared partitions
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
-tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
-# make sure the files have the correct access permissions
-sudo service clickhouse-server restart
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.trips_mergetree"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/trips_mergetree/partitions/trips_mergetree.tar
+$ tar xvf trips_mergetree.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
+$ # make sure the files have the correct access permissions
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.trips_mergetree"
```
!!!info
@@ -303,7 +300,7 @@ SELECT cab_type, count(*) FROM trips_mergetree GROUP BY cab_type
Q2:
-``` sql
+```sql
SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenger_count
```
@@ -311,7 +308,7 @@ SELECT passenger_count, avg(total_amount) FROM trips_mergetree GROUP BY passenge
Q3:
-``` sql
+```sql
SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetree GROUP BY passenger_count, year
```
@@ -319,7 +316,7 @@ SELECT passenger_count, toYear(pickup_date) AS year, count(*) FROM trips_mergetr
Q4:
-``` sql
+```sql
SELECT passenger_count, toYear(pickup_date) AS year, round(trip_distance) AS distance, count(*)
FROM trips_mergetree
GROUP BY passenger_count, year, distance
@@ -341,19 +338,19 @@ ORDER BY year, count(*) DESC
On each server:
-```
+```sql
CREATE TABLE default.trips_mergetree_third ( trip_id UInt32, vendor_id Enum8('1' = 1, '2' = 2, 'CMT' = 3, 'VTS' = 4, 'DDS' = 5, 'B02512' = 10, 'B02598' = 11, 'B02617' = 12, 'B02682' = 13, 'B02764' = 14), pickup_date Date, pickup_datetime DateTime, dropoff_date Date, dropoff_datetime DateTime, store_and_fwd_flag UInt8, rate_code_id UInt8, pickup_longitude Float64, pickup_latitude Float64, dropoff_longitude Float64, dropoff_latitude Float64, passenger_count UInt8, trip_distance Float64, fare_amount Float32, extra Float32, mta_tax Float32, tip_amount Float32, tolls_amount Float32, ehail_fee Float32, improvement_surcharge Float32, total_amount Float32, payment_type_ Enum8('UNK' = 0, 'CSH' = 1, 'CRE' = 2, 'NOC' = 3, 'DIS' = 4), trip_type UInt8, pickup FixedString(25), dropoff FixedString(25), cab_type Enum8('yellow' = 1, 'green' = 2, 'uber' = 3), pickup_nyct2010_gid UInt8, pickup_ctlabel Float32, pickup_borocode UInt8, pickup_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), pickup_ct2010 FixedString(6), pickup_boroct2010 FixedString(7), pickup_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), pickup_ntacode FixedString(4), pickup_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. 
Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. 
George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), pickup_puma UInt16, dropoff_nyct2010_gid UInt8, dropoff_ctlabel Float32, dropoff_borocode UInt8, dropoff_boroname Enum8('' = 0, 'Manhattan' = 1, 'Bronx' = 2, 'Brooklyn' = 3, 'Queens' = 4, 'Staten Island' = 5), dropoff_ct2010 FixedString(6), dropoff_boroct2010 FixedString(7), dropoff_cdeligibil Enum8(' ' = 0, 'E' = 1, 'I' = 2), dropoff_ntacode FixedString(4), dropoff_ntaname Enum16('' = 0, 'Airport' = 1, 'Allerton-Pelham Gardens' = 2, 'Annadale-Huguenot-Prince\'s Bay-Eltingville' = 3, 'Arden Heights' = 4, 'Astoria' = 5, 'Auburndale' = 6, 'Baisley Park' = 7, 'Bath Beach' = 8, 'Battery Park City-Lower Manhattan' = 9, 'Bay Ridge' = 10, 'Bayside-Bayside Hills' = 11, 'Bedford' = 12, 'Bedford Park-Fordham North' = 13, 'Bellerose' = 14, 'Belmont' = 15, 'Bensonhurst East' = 16, 'Bensonhurst West' = 17, 'Borough Park' = 18, 'Breezy Point-Belle Harbor-Rockaway Park-Broad Channel' = 19, 'Briarwood-Jamaica Hills' = 20, 'Brighton Beach' = 21, 'Bronxdale' = 22, 'Brooklyn Heights-Cobble Hill' = 23, 'Brownsville' = 24, 'Bushwick North' = 25, 'Bushwick South' = 26, 'Cambria Heights' = 27, 'Canarsie' = 28, 'Carroll Gardens-Columbia Street-Red Hook' = 29, 'Central Harlem North-Polo Grounds' = 30, 'Central Harlem South' = 31, 'Charleston-Richmond Valley-Tottenville' = 32, 'Chinatown' = 33, 'Claremont-Bathgate' = 34, 'Clinton' = 35, 'Clinton Hill' = 36, 'Co-op City' = 37, 'College Point' = 38, 'Corona' = 39, 'Crotona Park East' = 40, 'Crown Heights North' = 41, 'Crown Heights South' = 42, 'Cypress Hills-City Line' = 43, 'DUMBO-Vinegar Hill-Downtown Brooklyn-Boerum Hill' = 44, 'Douglas Manor-Douglaston-Little Neck' = 45, 'Dyker Heights' = 46, 'East Concourse-Concourse Village' = 47, 'East Elmhurst' = 48, 'East Flatbush-Farragut' = 49, 'East Flushing' = 50, 'East Harlem North' = 51, 'East Harlem South' = 52, 'East New York' = 53, 'East New York (Pennsylvania Ave)' = 54, 'East Tremont' = 55, 'East Village' = 56, 'East Williamsburg' = 57, 'Eastchester-Edenwald-Baychester' = 58, 'Elmhurst' = 59, 'Elmhurst-Maspeth' = 60, 'Erasmus' = 61, 'Far Rockaway-Bayswater' = 62, 'Flatbush' = 63, 'Flatlands' = 64, 'Flushing' = 65, 'Fordham South' = 66, 'Forest Hills' = 67, 'Fort Greene' = 68, 'Fresh Meadows-Utopia' = 69, 'Ft. Totten-Bay Terrace-Clearview' = 70, 'Georgetown-Marine Park-Bergen Beach-Mill Basin' = 71, 'Glen Oaks-Floral Park-New Hyde Park' = 72, 'Glendale' = 73, 'Gramercy' = 74, 'Grasmere-Arrochar-Ft. 
Wadsworth' = 75, 'Gravesend' = 76, 'Great Kills' = 77, 'Greenpoint' = 78, 'Grymes Hill-Clifton-Fox Hills' = 79, 'Hamilton Heights' = 80, 'Hammels-Arverne-Edgemere' = 81, 'Highbridge' = 82, 'Hollis' = 83, 'Homecrest' = 84, 'Hudson Yards-Chelsea-Flatiron-Union Square' = 85, 'Hunters Point-Sunnyside-West Maspeth' = 86, 'Hunts Point' = 87, 'Jackson Heights' = 88, 'Jamaica' = 89, 'Jamaica Estates-Holliswood' = 90, 'Kensington-Ocean Parkway' = 91, 'Kew Gardens' = 92, 'Kew Gardens Hills' = 93, 'Kingsbridge Heights' = 94, 'Laurelton' = 95, 'Lenox Hill-Roosevelt Island' = 96, 'Lincoln Square' = 97, 'Lindenwood-Howard Beach' = 98, 'Longwood' = 99, 'Lower East Side' = 100, 'Madison' = 101, 'Manhattanville' = 102, 'Marble Hill-Inwood' = 103, 'Mariner\'s Harbor-Arlington-Port Ivory-Graniteville' = 104, 'Maspeth' = 105, 'Melrose South-Mott Haven North' = 106, 'Middle Village' = 107, 'Midtown-Midtown South' = 108, 'Midwood' = 109, 'Morningside Heights' = 110, 'Morrisania-Melrose' = 111, 'Mott Haven-Port Morris' = 112, 'Mount Hope' = 113, 'Murray Hill' = 114, 'Murray Hill-Kips Bay' = 115, 'New Brighton-Silver Lake' = 116, 'New Dorp-Midland Beach' = 117, 'New Springville-Bloomfield-Travis' = 118, 'North Corona' = 119, 'North Riverdale-Fieldston-Riverdale' = 120, 'North Side-South Side' = 121, 'Norwood' = 122, 'Oakland Gardens' = 123, 'Oakwood-Oakwood Beach' = 124, 'Ocean Hill' = 125, 'Ocean Parkway South' = 126, 'Old Astoria' = 127, 'Old Town-Dongan Hills-South Beach' = 128, 'Ozone Park' = 129, 'Park Slope-Gowanus' = 130, 'Parkchester' = 131, 'Pelham Bay-Country Club-City Island' = 132, 'Pelham Parkway' = 133, 'Pomonok-Flushing Heights-Hillcrest' = 134, 'Port Richmond' = 135, 'Prospect Heights' = 136, 'Prospect Lefferts Gardens-Wingate' = 137, 'Queens Village' = 138, 'Queensboro Hill' = 139, 'Queensbridge-Ravenswood-Long Island City' = 140, 'Rego Park' = 141, 'Richmond Hill' = 142, 'Ridgewood' = 143, 'Rikers Island' = 144, 'Rosedale' = 145, 'Rossville-Woodrow' = 146, 'Rugby-Remsen Village' = 147, 'Schuylerville-Throgs Neck-Edgewater Park' = 148, 'Seagate-Coney Island' = 149, 'Sheepshead Bay-Gerritsen Beach-Manhattan Beach' = 150, 'SoHo-TriBeCa-Civic Center-Little Italy' = 151, 'Soundview-Bruckner' = 152, 'Soundview-Castle Hill-Clason Point-Harding Park' = 153, 'South Jamaica' = 154, 'South Ozone Park' = 155, 'Springfield Gardens North' = 156, 'Springfield Gardens South-Brookville' = 157, 'Spuyten Duyvil-Kingsbridge' = 158, 'St. Albans' = 159, 'Stapleton-Rosebank' = 160, 'Starrett City' = 161, 'Steinway' = 162, 'Stuyvesant Heights' = 163, 'Stuyvesant Town-Cooper Village' = 164, 'Sunset Park East' = 165, 'Sunset Park West' = 166, 'Todt Hill-Emerson Hill-Heartland Village-Lighthouse Hill' = 167, 'Turtle Bay-East Midtown' = 168, 'University Heights-Morris Heights' = 169, 'Upper East Side-Carnegie Hill' = 170, 'Upper West Side' = 171, 'Van Cortlandt Village' = 172, 'Van Nest-Morris Park-Westchester Square' = 173, 'Washington Heights North' = 174, 'Washington Heights South' = 175, 'West Brighton' = 176, 'West Concourse' = 177, 'West Farms-Bronx River' = 178, 'West New Brighton-New Brighton-St. 
George' = 179, 'West Village' = 180, 'Westchester-Unionport' = 181, 'Westerleigh' = 182, 'Whitestone' = 183, 'Williamsbridge-Olinville' = 184, 'Williamsburg' = 185, 'Windsor Terrace' = 186, 'Woodhaven' = 187, 'Woodlawn-Wakefield' = 188, 'Woodside' = 189, 'Yorkville' = 190, 'park-cemetery-etc-Bronx' = 191, 'park-cemetery-etc-Brooklyn' = 192, 'park-cemetery-etc-Manhattan' = 193, 'park-cemetery-etc-Queens' = 194, 'park-cemetery-etc-Staten Island' = 195), dropoff_puma UInt16) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
```
On the source server:
-``` sql
+```sql
CREATE TABLE trips_mergetree_x3 AS trips_mergetree_third ENGINE = Distributed(perftest, default, trips_mergetree_third, rand())
```
The following query redistributes the data:
-``` sql
+```sql
INSERT INTO trips_mergetree_x3 SELECT * FROM trips_mergetree
```
diff --git a/docs/ru/getting_started/example_datasets/ontime.md b/docs/ru/getting_started/example_datasets/ontime.md
index 32f222b0630..cfaf959464c 100644
--- a/docs/ru/getting_started/example_datasets/ontime.md
+++ b/docs/ru/getting_started/example_datasets/ontime.md
@@ -24,7 +24,7 @@ done
Creating the table:
-``` sql
+```sql
CREATE TABLE `ontime` (
`Year` UInt16,
`Quarter` UInt8,
@@ -141,17 +141,17 @@ CREATE TABLE `ontime` (
Loading the data:
```bash
-for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
+$ for i in *.zip; do echo $i; unzip -cq $i '*.csv' | sed 's/\.00//g' | clickhouse-client --host=example-perftest01j --query="INSERT INTO ontime FORMAT CSVWithNames"; done
```
## Downloading prepared partitions
```bash
-curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
-tar xvf ontime.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
-# make sure the files have the correct access permissions
-sudo service clickhouse-server restart
-clickhouse-client --query "SELECT COUNT(*) FROM datasets.ontime"
+$ curl -O https://clickhouse-datasets.s3.yandex.net/ontime/partitions/ontime.tar
+$ tar xvf ontime.tar -C /var/lib/clickhouse # path to the ClickHouse data directory
+$ # make sure the files have the correct access permissions
+$ sudo service clickhouse-server restart
+$ clickhouse-client --query "SELECT COUNT(*) FROM datasets.ontime"
```
!!!info
@@ -162,7 +162,7 @@ clickhouse-client --query "SELECT COUNT(*) FROM datasets.ontime"
Q0.
-``` sql
+```sql
SELECT avg(c1)
FROM
(
@@ -174,7 +174,7 @@ FROM
Q1. The number of flights per day from 2000 to 2008
-``` sql
+```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE Year>=2000 AND Year<=2008
@@ -184,7 +184,7 @@ ORDER BY c DESC;
Q2. The number of flights delayed by more than 10 minutes, grouped by day of week, for 2000-2008
-``` sql
+```sql
SELECT DayOfWeek, count(*) AS c
FROM ontime
WHERE DepDelay>10 AND Year>=2000 AND Year<=2008
@@ -194,7 +194,7 @@ ORDER BY c DESC;
Q3. The number of delays by airport for 2000-2008
-``` sql
+```sql
SELECT Origin, count(*) AS c
FROM ontime
WHERE DepDelay>10 AND Year>=2000 AND Year<=2008
@@ -205,7 +205,7 @@ LIMIT 10;
Q4. The number of delays by carrier for 2007
-``` sql
+```sql
SELECT Carrier, count(*)
FROM ontime
WHERE DepDelay>10 AND Year=2007
@@ -215,7 +215,7 @@ ORDER BY count(*) DESC;
Q5. The percentage of delays by carrier for 2007
-``` sql
+```sql
SELECT Carrier, c, c2, c*100/c2 as c3
FROM
(
@@ -241,7 +241,7 @@ ORDER BY c3 DESC;
A better version of the same query:
-``` sql
+```sql
SELECT Carrier, avg(DepDelay>10)*100 AS c3
FROM ontime
WHERE Year=2007
@@ -251,7 +251,7 @@ ORDER BY Carrier
Q6. The previous query for a broader range of years, 2000-2008
-``` sql
+```sql
SELECT Carrier, c, c2, c*100/c2 as c3
FROM
(
@@ -277,7 +277,7 @@ ORDER BY c3 DESC;
A better version of the same query:
-``` sql
+```sql
SELECT Carrier, avg(DepDelay>10)*100 AS c3
FROM ontime
WHERE Year>=2000 AND Year<=2008
@@ -287,7 +287,7 @@ ORDER BY Carrier;
Q7. The percentage of flights delayed by more than 10 minutes, by year
-``` sql
+```sql
SELECT Year, c1/c2
FROM
(
@@ -311,7 +311,7 @@ ORDER BY Year;
A better version of the same query:
-``` sql
+```sql
SELECT Year, avg(DepDelay>10)
FROM ontime
GROUP BY Year
@@ -320,7 +320,7 @@ ORDER BY Year;
Q8. The most popular destinations by the number of directly connected cities, for various year ranges
-``` sql
+```sql
SELECT DestCityName, uniqExact(OriginCityName) AS u
FROM ontime
WHERE Year>=2000 and Year<=2010
@@ -331,7 +331,7 @@ LIMIT 10;
Q9.
-``` sql
+```sql
SELECT Year, count(*) AS c1
FROM ontime
GROUP BY Year;
@@ -339,7 +339,7 @@ GROUP BY Year;
Q10.
-``` sql
+```sql
SELECT
min(Year), max(Year), Carrier, count(*) AS cnt,
sum(ArrDelayMinutes>30) AS flights_delayed,
@@ -357,7 +357,7 @@ LIMIT 1000;
Bonus:
-``` sql
+```sql
SELECT avg(cnt)
FROM
(
diff --git a/docs/ru/getting_started/example_datasets/star_schema.md b/docs/ru/getting_started/example_datasets/star_schema.md
index 545eaeea6a6..2e66ced7149 100644
--- a/docs/ru/getting_started/example_datasets/star_schema.md
+++ b/docs/ru/getting_started/example_datasets/star_schema.md
@@ -2,25 +2,25 @@
Compiling dbgen:
-```
-git clone git@github.com:vadimtk/ssb-dbgen.git
-cd ssb-dbgen
-make
+```bash
+$ git clone git@github.com:vadimtk/ssb-dbgen.git
+$ cd ssb-dbgen
+$ make
```
Generating data:
-```
-./dbgen -s 1000 -T c
-./dbgen -s 1000 -T l
-./dbgen -s 1000 -T p
-./dbgen -s 1000 -T s
-./dbgen -s 1000 -T d
+```bash
+$ ./dbgen -s 1000 -T c
+$ ./dbgen -s 1000 -T l
+$ ./dbgen -s 1000 -T p
+$ ./dbgen -s 1000 -T s
+$ ./dbgen -s 1000 -T d
```
Creating tables in ClickHouse:
-```
+```sql
CREATE TABLE customer
(
C_CUSTKEY UInt32,
@@ -85,16 +85,16 @@ ENGINE = MergeTree ORDER BY S_SUPPKEY;
Inserting data:
-```
-clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
-clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
-clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
-clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
+```bash
+$ clickhouse-client --query "INSERT INTO customer FORMAT CSV" < customer.tbl
+$ clickhouse-client --query "INSERT INTO part FORMAT CSV" < part.tbl
+$ clickhouse-client --query "INSERT INTO supplier FORMAT CSV" < supplier.tbl
+$ clickhouse-client --query "INSERT INTO lineorder FORMAT CSV" < lineorder.tbl
```
Converting "star schema" to denormalized "flat schema":
-```
+```sql
SET max_memory_usage = 20000000000, allow_experimental_multiple_joins_emulation = 1;
CREATE TABLE lineorder_flat
@@ -112,44 +112,56 @@ ALTER TABLE lineorder_flat DROP COLUMN C_CUSTKEY, DROP COLUMN S_SUPPKEY, DROP CO
Running the queries:
-```
Q1.1
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toYear(LO_ORDERDATE) = 1993 AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
-
+```
Q1.2
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toYYYYMM(LO_ORDERDATE) = 199401 AND LO_DISCOUNT BETWEEN 4 AND 6 AND LO_QUANTITY BETWEEN 26 AND 35;
-
+```
Q1.3
+```sql
SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toISOWeek(LO_ORDERDATE) = 6 AND toYear(LO_ORDERDATE) = 1994 AND LO_DISCOUNT BETWEEN 5 AND 7 AND LO_QUANTITY BETWEEN 26 AND 35;
-
+```
Q2.1
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q2.2
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_BRAND BETWEEN 'MFGR#2221' AND 'MFGR#2228' AND S_REGION = 'ASIA' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q2.3
+```sql
SELECT sum(LO_REVENUE), toYear(LO_ORDERDATE) AS year, P_BRAND FROM lineorder_flat WHERE P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE' GROUP BY year, P_BRAND ORDER BY year, P_BRAND;
-
+```
Q3.1
+```sql
SELECT C_NATION, S_NATION, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997 GROUP BY C_NATION, S_NATION, year ORDER BY year asc, revenue desc;
-
+```
Q3.2
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES' AND year >= 1992 AND year <= 1997 GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q3.3
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND year >= 1992 AND year <= 1997 GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q3.4
+```sql
SELECT C_CITY, S_CITY, toYear(LO_ORDERDATE) AS year, sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND toYYYYMM(LO_ORDERDATE) = '199712' GROUP BY C_CITY, S_CITY, year ORDER BY year asc, revenue desc;
-
+```
Q4.1
+```sql
SELECT toYear(LO_ORDERDATE) AS year, C_NATION, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year, C_NATION ORDER BY year, C_NATION;
-
+```
Q4.2
+```sql
SELECT toYear(LO_ORDERDATE) AS year, S_NATION, P_CATEGORY, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (year = 1997 OR year = 1998) AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year, S_NATION, P_CATEGORY ORDER BY year, S_NATION, P_CATEGORY;
-
+```
Q4.3
+```sql
SELECT toYear(LO_ORDERDATE) AS year, S_CITY, P_BRAND, sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE S_NATION = 'UNITED STATES' AND (year = 1997 OR year = 1998) AND P_CATEGORY = 'MFGR#14' GROUP BY year, S_CITY, P_BRAND ORDER BY year, S_CITY, P_BRAND;
```
diff --git a/docs/ru/getting_started/example_datasets/wikistat.md b/docs/ru/getting_started/example_datasets/wikistat.md
index ed8037ffc8e..12469694c53 100644
--- a/docs/ru/getting_started/example_datasets/wikistat.md
+++ b/docs/ru/getting_started/example_datasets/wikistat.md
@@ -4,7 +4,7 @@
Создание таблицы:
-``` sql
+```sql
CREATE TABLE wikistat
(
date Date,
@@ -20,9 +20,9 @@ CREATE TABLE wikistat
Загрузка данных:
```bash
-for i in {2007..2016}; do for j in {01..12}; do echo $i-$j >&2; curl -sSL "http://dumps.wikimedia.org/other/pagecounts-raw/$i/$i-$j/" | grep -oE 'pagecounts-[0-9]+-[0-9]+\.gz'; done; done | sort | uniq | tee links.txt
-cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pagecounts-raw/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1/')/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1-\2/')/$link; done
-ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
+$ for i in {2007..2016}; do for j in {01..12}; do echo $i-$j >&2; curl -sSL "http://dumps.wikimedia.org/other/pagecounts-raw/$i/$i-$j/" | grep -oE 'pagecounts-[0-9]+-[0-9]+\.gz'; done; done | sort | uniq | tee links.txt
+$ cat links.txt | while read link; do wget http://dumps.wikimedia.org/other/pagecounts-raw/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1/')/$(echo $link | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})[0-9]{2}-[0-9]+\.gz/\1-\2/')/$link; done
+$ ls -1 /opt/wikistat/ | grep gz | while read i; do echo $i; gzip -cd /opt/wikistat/$i | ./wikistat-loader --time="$(echo -n $i | sed -r 's/pagecounts-([0-9]{4})([0-9]{2})([0-9]{2})-([0-9]{2})([0-9]{2})([0-9]{2})\.gz/\1-\2-\3 \4-00-00/')" | clickhouse-client --query="INSERT INTO wikistat FORMAT TabSeparated"; done
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/getting_started/example_datasets/wikistat/)
diff --git a/docs/ru/getting_started/index.md b/docs/ru/getting_started/index.md
index e3fb2ab0985..822955df9eb 100644
--- a/docs/ru/getting_started/index.md
+++ b/docs/ru/getting_started/index.md
@@ -18,8 +18,8 @@ $ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not
Чтобы установить официальные пакеты, пропишите репозиторий Яндекса в `/etc/apt/sources.list` или в отдельный файл `/etc/apt/sources.list.d/clickhouse.list`:
-```
-deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
+```text
+deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
```
Если вы хотите использовать наиболее свежую тестовую версию, замените `stable` на `testing` (не рекомендуется для production окружений).
@@ -27,10 +27,10 @@ deb http://repo.yandex.ru/clickhouse/deb/stable/ main/
Затем для самой установки пакетов выполните:
```bash
-sudo apt-get install dirmngr # optional
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
+$ sudo apt-get install dirmngr # optional
+$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 # optional
+$ sudo apt-get update
+$ sudo apt-get install clickhouse-client clickhouse-server
```
Также эти пакеты можно скачать и установить вручную отсюда: <https://repo.yandex.ru/clickhouse/deb/stable/main/>.
@@ -41,9 +41,9 @@ sudo apt-get install clickhouse-client clickhouse-server
Сначала нужно подключить официальный репозиторий:
```bash
-sudo yum install yum-utils
-sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
-sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
+$ sudo yum install yum-utils
+$ sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
+$ sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
```
Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений).
@@ -52,7 +52,7 @@ Then run these commands to actually install packages:
Для, собственно, установки пакетов необходимо выполнить следующие команды:
```bash
-sudo yum install clickhouse-server clickhouse-client
+$ sudo yum install clickhouse-server clickhouse-client
```
Также есть возможность установить пакеты вручную, скачав отсюда: <https://repo.yandex.ru/clickhouse/rpm/stable/x86_64>.
@@ -67,14 +67,14 @@ sudo yum install clickhouse-server clickhouse-client
Можно скомпилировать пакеты и установить их, либо использовать программы без установки пакетов. Также при ручной сборке можно отключить необходимость поддержки набора инструкций SSE 4.2 или собрать под процессоры архитектуры AArch64.
-```
+```text
Client: dbms/programs/clickhouse-client
Server: dbms/programs/clickhouse-server
```
Для работы собранного вручную сервера необходимо создать директории для данных и метаданных, а также сделать их `chown` для желаемого пользователя. Пути к этим директориям могут быть изменены в конфигурационном файле сервера (src/dbms/programs/server/config.xml), по умолчанию используются следующие:
-```
+```text
/opt/clickhouse/data/default/
/opt/clickhouse/metadata/default/
```
@@ -85,7 +85,7 @@ Server: dbms/programs/clickhouse-server
Для запуска сервера в качестве демона, выполните:
-``` bash
+```bash
$ sudo service clickhouse-server start
```
@@ -95,7 +95,7 @@ $ sudo service clickhouse-server start
Также можно запустить сервер вручную из консоли:
-``` bash
+```bash
$ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
```
@@ -104,7 +104,7 @@ $ clickhouse-server --config-file=/etc/clickhouse-server/config.xml
После запуска сервера, соединиться с ним можно с помощью клиента командной строки:
-``` bash
+```bash
$ clickhouse-client
```
@@ -116,23 +116,19 @@ $ clickhouse-client
Пример проверки работоспособности системы:
-``` bash
+```bash
$ ./clickhouse-client
ClickHouse client version 0.0.18749.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.18749.
-
-:) SELECT 1
-
+```
+```sql
SELECT 1
-
+```
+```text
┌─1─┐
│ 1 │
└───┘
-
-1 rows in set. Elapsed: 0.003 sec.
-
-:)
```
**Поздравляем, система работает!**
diff --git a/docs/ru/index.md b/docs/ru/index.md
index 2db293a36ff..fbb80bfb8a9 100644
--- a/docs/ru/index.md
+++ b/docs/ru/index.md
@@ -79,22 +79,16 @@ ClickHouse - столбцовая система управления базам
Например, для запроса "посчитать количество записей для каждой рекламной системы", требуется прочитать один столбец "идентификатор рекламной системы", который занимает 1 байт в несжатом виде. Если большинство переходов было не с рекламных систем, то можно рассчитывать хотя бы на десятикратное сжатие этого столбца. При использовании быстрого алгоритма сжатия, возможно разжатие данных со скоростью более нескольких гигабайт несжатых данных в секунду. То есть, такой запрос может выполняться со скоростью около нескольких миллиардов строк в секунду на одном сервере. На практике, такая скорость действительно достигается.
Пример
-```
+```bash
$ clickhouse-client
ClickHouse client version 0.0.52053.
Connecting to localhost:9000.
Connected to ClickHouse server version 0.0.52053.
-
-:) SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
-
-SELECT
- CounterID,
- count()
-FROM hits
-GROUP BY CounterID
-ORDER BY count() DESC
-LIMIT 20
-
+```
+```sql
+SELECT CounterID, count() FROM hits GROUP BY CounterID ORDER BY count() DESC LIMIT 20
+```
+```text
┌─CounterID─┬──count()─┐
│ 114208 │ 56057344 │
│ 115080 │ 51619590 │
@@ -117,10 +111,6 @@ LIMIT 20
│ 115079 │ 8837972 │
│ 337234 │ 8205961 │
└───────────┴──────────┘
-
-20 rows in set. Elapsed: 0.153 sec. Processed 1.00 billion rows, 4.00 GB (6.53 billion rows/s., 26.10 GB/s.)
-
-:)
```
diff --git a/docs/ru/interfaces/cli.md b/docs/ru/interfaces/cli.md
index 59980109240..a2b624c6f21 100644
--- a/docs/ru/interfaces/cli.md
+++ b/docs/ru/interfaces/cli.md
@@ -22,14 +22,14 @@ Connected to ClickHouse server version 0.0.26176.
Примеры использования клиента для вставки данных:
```bash
-echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
-cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
3, 'some text', '2016-08-14 00:00:00'
4, 'some more text', '2016-08-14 00:00:01'
_EOF
-cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
+$ cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
```
В batch режиме в качестве формата данных по умолчанию используется формат TabSeparated. Формат может быть указан в секции FORMAT запроса.
@@ -79,7 +79,7 @@ clickhouse-client --param_parName="[1, 2]" -q "SELECT * FROM table WHERE a = {p
Отформатируйте запрос обычным способом. Представьте значения, которые вы хотите передать из параметров приложения в запрос в следующем формате:
-```
+```sql
{<name>:<data type>}
```
@@ -89,7 +89,7 @@ clickhouse-client --param_parName="[1, 2]" -q "SELECT * FROM table WHERE a = {p
#### Пример
```bash
-clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
+$ clickhouse-client --param_tuple_in_tuple="(10, ('dt', 10))" -q "SELECT * FROM table WHERE val = {tuple_in_tuple:Tuple(UInt8, Tuple(String, UInt8))}"
```
## Конфигурирование {#interfaces_cli_configuration}
diff --git a/docs/ru/interfaces/formats.md b/docs/ru/interfaces/formats.md
index 9acf2d67e4a..c76c68a7685 100644
--- a/docs/ru/interfaces/formats.md
+++ b/docs/ru/interfaces/formats.md
@@ -50,7 +50,7 @@ ClickHouse может принимать (`INSERT`) и отдавать (`SELECT
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT TabSeparated
```
-```
+```text
2014-03-17 1406958
2014-03-18 1383658
2014-03-19 1405797
@@ -82,7 +82,7 @@ SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORD
Строки выводятся с экранированием спецсимволов с помощью обратного слеша. При выводе, используются следующие escape-последовательности: `\b`, `\f`, `\r`, `\n`, `\t`, `\0`, `\'`, `\\`. Парсер также поддерживает последовательности `\a`, `\v`, и `\xHH` (последовательности hex escape) и любые последовательности вида `\c`, где `c` — любой символ (такие последовательности преобразуются в `c`). Таким образом, при чтении поддерживаются форматы, где перевод строки может быть записан как `\n` и как `\` и перевод строки. Например, строка `Hello world`, где между словами вместо пробела стоит перевод строки, может быть считана в любом из следующих вариантов:
-```
+```text
Hello\nworld
Hello\
@@ -211,7 +211,7 @@ format_schema_rows_between_delimiter = '\n '
```
Пример ввода:
-```
+```text
Some header
Page views: 5, User id: 4324182021466249494, Useless field: hello, Duration: 146, Sign: -1
Page views: 6, User id: 4324182021466249494, Useless field: world, Duration: 185, Sign: 1
@@ -239,7 +239,7 @@ format_schema_rows_between_delimiter = ','
Похож на TabSeparated, но выводит значения в формате name=value. Имена экранируются так же, как строки в формате TabSeparated и, дополнительно, экранируется также символ =.
-```
+```text
SearchPhrase= count()=8267016
SearchPhrase=интерьер ванной комнаты count()=2166
SearchPhrase=яндекс count()=1655
@@ -258,7 +258,7 @@ SearchPhrase=баку count()=1000
SELECT * FROM t_null FORMAT TSKV
```
-```
+```text
x=1 y=\N
```
@@ -274,8 +274,8 @@ x=1 y=\N
При форматировании, строки выводятся в двойных кавычках. Двойная кавычка внутри строки выводится как две двойные кавычки подряд. Других правил экранирования нет. Даты и даты-с-временем выводятся в двойных кавычках. Числа выводятся без кавычек. Значения разделяются символом-разделителем, по умолчанию — `,`. Символ-разделитель определяется настройкой [format_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter). Строки разделяются unix переводом строки (LF). Массивы сериализуются в CSV следующим образом: сначала массив сериализуется в строку, как в формате TabSeparated, а затем полученная строка выводится в CSV в двойных кавычках. Кортежи в формате CSV сериализуются, как отдельные столбцы (то есть, теряется их вложенность в кортеж).
-```
-clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
+```bash
+$ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FORMAT CSV" < data.csv
```
*По умолчанию — `,`. См. настройку [format_csv_delimiter](../operations/settings/settings.md#settings-format_csv_delimiter) для дополнительной информации.
@@ -460,7 +460,7 @@ ClickHouse заменяет опущенные значения значения
Рассмотрим следующую таблицу:
-```
+```sql
CREATE TABLE IF NOT EXISTS example_table
(
x UInt32,
@@ -478,7 +478,7 @@ CREATE TABLE IF NOT EXISTS example_table
Рассмотрим в качестве примера таблицу `UserActivity`:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -487,7 +487,7 @@ CREATE TABLE IF NOT EXISTS example_table
Запрос `SELECT * FROM UserActivity FORMAT JSONEachRow` возвращает:
-```
+```text
{"UserID":"4324182021466249494","PageViews":5,"Duration":146,"Sign":-1}
{"UserID":"4324182021466249494","PageViews":6,"Duration":185,"Sign":1}
```
@@ -578,7 +578,7 @@ SELECT * FROM json_each_row_nested
```sql
SELECT * FROM t_null
```
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
@@ -590,7 +590,7 @@ SELECT * FROM t_null
SELECT 'String with \'quotes\' and \t character' AS Escaping_test
```
-```
+```text
┌─Escaping_test────────────────────────┐
│ String with 'quotes' and character │
└──────────────────────────────────────┘
@@ -605,7 +605,7 @@ SELECT 'String with \'quotes\' and \t character' AS Escaping_test
SELECT EventDate, count() AS c FROM test.hits GROUP BY EventDate WITH TOTALS ORDER BY EventDate FORMAT PrettyCompact
```
-```
+```text
┌──EventDate─┬───────c─┐
│ 2014-03-17 │ 1406958 │
│ 2014-03-18 │ 1383658 │
@@ -644,7 +644,7 @@ Extremes:
Пример:
```bash
-watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
+$ watch -n1 "clickhouse-client --query='SELECT event, value FROM system.events FORMAT PrettyCompactNoEscapes'"
```
Для отображения в браузере, вы можете использовать HTTP интерфейс.
@@ -703,7 +703,7 @@ Array представлены как длина в формате varint (unsig
```sql
SELECT * FROM t_null FORMAT Vertical
```
-```
+```text
Row 1:
──────
x: 1
@@ -716,7 +716,7 @@ y: ᴺᵁᴸᴸ
SELECT 'string with \'quotes\' and \t with some special \n characters' AS test FORMAT Vertical
```
-```
+```text
Row 1:
──────
test: string with 'quotes' and with some special
@@ -806,12 +806,12 @@ Cap'n Proto - формат бинарных сообщений, похож на
Сообщения Cap'n Proto строго типизированы и не самоописывающиеся, т.е. нуждаются во внешнем описании схемы. Схема применяется "на лету" и кешируется между запросами.
```bash
-cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
+$ cat capnproto_messages.bin | clickhouse-client --query "INSERT INTO test.hits FORMAT CapnProto SETTINGS format_schema='schema:Message'"
```
Где `schema.capnp` выглядит следующим образом:
-```
+```capnp
struct Message {
SearchPhrase @0 :Text;
c @1 :UInt64;
@@ -838,12 +838,12 @@ SELECT * FROM test.table FORMAT Protobuf SETTINGS format_schema = 'schemafile:Me
или
```bash
-cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT Protobuf SETTINGS format_schema='schemafile:MessageType'"
+$ cat protobuf_messages.bin | clickhouse-client --query "INSERT INTO test.table FORMAT Protobuf SETTINGS format_schema='schemafile:MessageType'"
```
Где файл `schemafile.proto` может выглядеть так:
-```
+```protobuf
syntax = "proto3";
message MessageType {
@@ -860,7 +860,7 @@ message MessageType {
Вложенные сообщения поддерживаются, например, для поля `z` в таком сообщении
-```
+```protobuf
message MessageType {
message XType {
message YType {
@@ -877,7 +877,7 @@ ClickHouse попытается найти столбец с именем `x.y.z
Значения по умолчанию, определённые в схеме `proto2`, например,
-```
+```protobuf
syntax = "proto2";
message MessageType {
@@ -926,14 +926,14 @@ ClickHouse поддерживает настраиваемую точность
Чтобы вставить в ClickHouse данные из файла в формате Parquet, выполните команду следующего вида:
-```
-cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
+```bash
+$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
```
Чтобы получить данные из таблицы ClickHouse и сохранить их в файл формата Parquet, используйте команду следующего вида:
-```
-clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
+```bash
+$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
```
Для обмена данными с экосистемой Hadoop можно использовать движки таблиц [HDFS](../operations/table_engines/hdfs.md) и `URL`.
diff --git a/docs/ru/interfaces/http.md b/docs/ru/interfaces/http.md
index 77eb984e8f4..c7c32a46a4c 100644
--- a/docs/ru/interfaces/http.md
+++ b/docs/ru/interfaces/http.md
@@ -76,31 +76,31 @@ $ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @
Создаём таблицу:
```bash
-echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
```
Используем привычный запрос INSERT для вставки данных:
```bash
-echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'INSERT INTO t VALUES (1),(2),(3)' | curl 'http://localhost:8123/' --data-binary @-
```
Данные можно отправить отдельно от запроса:
```bash
-echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
+$ echo '(4),(5),(6)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20VALUES' --data-binary @-
```
Можно указать любой формат для данных. Формат Values - то же, что используется при записи INSERT INTO t VALUES:
```bash
-echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @-
+$ echo '(7),(8),(9)' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20Values' --data-binary @-
```
Можно вставить данные из tab-separated дампа, указав соответствующий формат:
```bash
-echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @-
+$ echo -ne '10\n11\n12\n' | curl 'http://localhost:8123/?query=INSERT%20INTO%20t%20FORMAT%20TabSeparated' --data-binary @-
```
Прочитаем содержимое таблицы. Данные выводятся в произвольном порядке из-за параллельной обработки запроса:
@@ -124,7 +124,7 @@ $ curl 'http://localhost:8123/?query=SELECT%20a%20FROM%20t'
Удаляем таблицу.
```bash
-echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
+$ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
```
Для запросов, которые не возвращают таблицу с данными, в случае успеха, выдаётся пустое тело ответа.
@@ -141,11 +141,11 @@ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
Примеры отправки данных со сжатием:
```bash
-#Отправка данных на сервер:
-curl -vsS "http://localhost:8123/?enable_http_compression=1" -d 'SELECT number FROM system.numbers LIMIT 10' -H 'Accept-Encoding: gzip'
+# Получение сжатых данных с сервера:
+$ curl -vsS "http://localhost:8123/?enable_http_compression=1" -d 'SELECT number FROM system.numbers LIMIT 10' -H 'Accept-Encoding: gzip'
-#Отправка данных клиенту:
-echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
+# Отправка сжатых данных на сервер:
+$ echo "SELECT 1" | gzip -c | curl -sS --data-binary @- -H 'Content-Encoding: gzip' 'http://localhost:8123/'
```
!!! note "Примечание"
@@ -174,13 +174,13 @@ $ echo 'SELECT number FROM numbers LIMIT 10' | curl 'http://localhost:8123/?data
1. С использованием HTTP Basic Authentification. Пример:
```bash
-echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
+$ echo 'SELECT 1' | curl 'http://user:password@localhost:8123/' -d @-
```
2. В параметрах URL user и password. Пример:
```bash
-echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
+$ echo 'SELECT 1' | curl 'http://localhost:8123/?user=user&password=password' -d @-
```
Если пользователь не задан, то используется `default`. Если пароль не задан, то используется пустой пароль.
@@ -208,7 +208,7 @@ $ echo 'SELECT number FROM system.numbers LIMIT 10' | curl 'http://localhost:812
Прогресс выполнения запроса можно отслеживать с помощью заголовков ответа `X-ClickHouse-Progress`. Для этого включите [send_progress_in_http_headers](../operations/settings/settings.md#settings-send_progress_in_http_headers). Пример последовательности заголовков:
-```
+```text
X-ClickHouse-Progress: {"read_rows":"2752512","read_bytes":"240570816","total_rows_to_read":"8880128"}
X-ClickHouse-Progress: {"read_rows":"5439488","read_bytes":"482285394","total_rows_to_read":"8880128"}
X-ClickHouse-Progress: {"read_rows":"8783786","read_bytes":"819092887","total_rows_to_read":"8880128"}
@@ -240,7 +240,7 @@ HTTP интерфейс позволяет передать внешние да
Пример:
```bash
-curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
+$ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
```
Буферизация позволяет избежать ситуации, когда код ответа и HTTP-заголовки были отправлены клиенту, после чего возникла ошибка выполнения запроса. В такой ситуации сообщение об ошибке записывается в конце тела ответа, и на стороне клиента ошибка может быть обнаружена только на этапе парсинга.
@@ -252,7 +252,7 @@ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wa
### Пример
```bash
-curl -sS "?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
+$ curl -sS "?param_id=2&param_phrase=test" -d "SELECT * FROM table WHERE int_column = {id:UInt8} and string_column = {phrase:String}"
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/interfaces/http_interface/)
diff --git a/docs/ru/interfaces/jdbc.md b/docs/ru/interfaces/jdbc.md
index 121a0e53eea..9c6d594bc1c 100644
--- a/docs/ru/interfaces/jdbc.md
+++ b/docs/ru/interfaces/jdbc.md
@@ -1,6 +1,6 @@
# JDBC-драйвер
-* **[Официальный драйвер](https://github.com/yandex/clickhouse-jdbc)**
+* **[Официальный драйвер](https://github.com/ClickHouse/clickhouse-jdbc)**
* Драйверы от сторонних организаций:
* [ClickHouse-Native-JDBC](https://github.com/housepower/ClickHouse-Native-JDBC)
* [clickhouse4j](https://github.com/blynkkk/clickhouse4j)
diff --git a/docs/ru/interfaces/odbc.md b/docs/ru/interfaces/odbc.md
index 4d4df867e1c..7605d6061b2 100644
--- a/docs/ru/interfaces/odbc.md
+++ b/docs/ru/interfaces/odbc.md
@@ -1,5 +1,5 @@
# ODBC-драйвер
-- [Официальный драйвер](https://github.com/yandex/clickhouse-odbc).
+- [Официальный драйвер](https://github.com/ClickHouse/clickhouse-odbc).
[Оригинальная статья](https://clickhouse.yandex/docs/ru/interfaces/odbc/)
diff --git a/docs/ru/interfaces/tcp.md b/docs/ru/interfaces/tcp.md
index da0ea735644..449c15b1bcc 100644
--- a/docs/ru/interfaces/tcp.md
+++ b/docs/ru/interfaces/tcp.md
@@ -1,5 +1,5 @@
# Родной интерфейс (TCP)
-Нативный протокол используется в [клиенте командной строки](cli.md), для взаимодействия между серверами во время обработки распределенных запросов, а также в других программах на C++. К сожалению, у родного протокола ClickHouse пока нет формальной спецификации, но в нем можно разобраться с использованием исходного кода ClickHouse (начиная с [примерно этого места](https://github.com/yandex/ClickHouse/tree/master/dbms/src/Client)) и/или путем перехвата и анализа TCP трафика.
+Нативный протокол используется в [клиенте командной строки](cli.md), для взаимодействия между серверами во время обработки распределенных запросов, а также в других программах на C++. К сожалению, у родного протокола ClickHouse пока нет формальной спецификации, но в нем можно разобраться с использованием исходного кода ClickHouse (начиная с [примерно этого места](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/src/Client)) и/или путем перехвата и анализа TCP трафика.
[Оригинальная статья](https://clickhouse.yandex/docs/ru/interfaces/tcp/)
diff --git a/docs/ru/operations/backup.md b/docs/ru/operations/backup.md
index e2162182673..1f7402fd25e 100644
--- a/docs/ru/operations/backup.md
+++ b/docs/ru/operations/backup.md
@@ -1,6 +1,6 @@
# Резервное копирование данных
-[Репликация](table_engines/replication.md) обеспечивает защиту от аппаратных сбоев, но не защищает от человеческих ошибок: случайного удаления данных, удаления не той таблицы, которую надо было, или таблицы на не том кластере, а также программных ошибок, которые приводят к неправильной обработке данных или их повреждению. Во многих случаях подобные ошибки влияют на все реплики. ClickHouse имеет встроенные средства защиты для предотвращения некоторых типов ошибок — например, по умолчанию [не получится удалить таблицы *MergeTree, содержащие более 50 Гб данных, одной командой](https://github.com/yandex/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). Однако эти средства защиты не охватывают все возможные случаи и могут быть обойдены.
+[Репликация](table_engines/replication.md) обеспечивает защиту от аппаратных сбоев, но не защищает от человеческих ошибок: случайного удаления данных, удаления не той таблицы, которую надо было, или таблицы на не том кластере, а также программных ошибок, которые приводят к неправильной обработке данных или их повреждению. Во многих случаях подобные ошибки влияют на все реплики. ClickHouse имеет встроенные средства защиты для предотвращения некоторых типов ошибок — например, по умолчанию [не получится удалить таблицы *MergeTree, содержащие более 50 Гб данных, одной командой](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/dbms/programs/server/config.xml#L322-L330). Однако эти средства защиты не охватывают все возможные случаи и могут быть обойдены.
Для того чтобы эффективно снизить последствия возможных человеческих ошибок, следует тщательно подготовить стратегию резервного копирования и восстановления данных **заранее**.
diff --git a/docs/ru/operations/configuration_files.md b/docs/ru/operations/configuration_files.md
index 9514734678d..78b803775e1 100644
--- a/docs/ru/operations/configuration_files.md
+++ b/docs/ru/operations/configuration_files.md
@@ -19,8 +19,10 @@
В `config.xml` может быть указан отдельный конфиг с настройками пользователей, профилей и квот. Относительный путь к нему указывается в элементе users_config. По умолчанию - `users.xml`. Если `users_config` не указан, то настройки пользователей, профилей и квот, указываются непосредственно в `config.xml`.
Для `users_config` могут также существовать переопределения в файлах из директории `users_config.d` (например, `users.d`) и подстановки. Например, можно иметь по отдельному конфигурационному файлу для каждого пользователя:
-``` xml
+```bash
$ cat /etc/clickhouse-server/users.d/alice.xml
+```
+```xml
diff --git a/docs/ru/operations/settings/constraints_on_settings.md b/docs/ru/operations/settings/constraints_on_settings.md
index 50c45a8356a..5a4442295d2 100644
--- a/docs/ru/operations/settings/constraints_on_settings.md
+++ b/docs/ru/operations/settings/constraints_on_settings.md
@@ -3,7 +3,7 @@
Ограничения на изменение настроек могут находиться внутри секции `users` файла `user.xml` и запрещают пользователю менять некоторые настройки с помощью запроса `SET`.
Выглядит это следующим образом:
-```
+```xml
<profiles>
  <имя_пользователя>
    <constraints>
@@ -31,7 +31,7 @@
**Пример:** Пусть файл `users.xml` содержит строки:
-```
+```xml
<profiles>
  <default>
    <max_memory_usage>10000000000</max_memory_usage>
@@ -52,13 +52,13 @@
Каждый из следующих запросов кинет исключение:
-```
+```sql
SET max_memory_usage=20000000001;
SET max_memory_usage=4999999999;
SET force_index_by_date=1;
```
-```
+```text
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be greater than 20000000000.
Code: 452, e.displayText() = DB::Exception: Setting max_memory_usage should not be less than 5000000000.
Code: 452, e.displayText() = DB::Exception: Setting force_index_by_date should not be changed.
diff --git a/docs/ru/operations/settings/query_complexity.md b/docs/ru/operations/settings/query_complexity.md
index 4ebd9868ecc..864764add48 100644
--- a/docs/ru/operations/settings/query_complexity.md
+++ b/docs/ru/operations/settings/query_complexity.md
@@ -38,7 +38,7 @@
Максимальный возможный объем оперативной памяти для запросов пользователя на одном сервере.
-Значения по умолчанию определены в файле [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). По умолчанию размер не ограничен (`max_memory_usage_for_user = 0`).
+Значения по умолчанию определены в файле [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L288). По умолчанию размер не ограничен (`max_memory_usage_for_user = 0`).
Смотрите также описание настройки [max_memory_usage](#settings_max_memory_usage).
@@ -46,7 +46,7 @@
Максимальный возможный объем оперативной памяти для всех запросов на одном сервере.
-Значения по умолчанию определены в файле [Settings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). По умолчанию размер не ограничен (`max_memory_usage_for_all_queries = 0`).
+Значения по умолчанию определены в файле [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Core/Settings.h#L289). По умолчанию размер не ограничен (`max_memory_usage_for_all_queries = 0`).
Смотрите также описание настройки [max_memory_usage](#settings_max_memory_usage).
diff --git a/docs/ru/operations/settings/settings.md b/docs/ru/operations/settings/settings.md
index 20017e88af4..e3273527ef4 100644
--- a/docs/ru/operations/settings/settings.md
+++ b/docs/ru/operations/settings/settings.md
@@ -175,7 +175,8 @@ ClickHouse применяет настройку в тех случаях, ко
```sql
SET input_format_values_interpret_expressions = 0;
INSERT INTO datetime_t VALUES (now())
-
+```
+```text
Exception on client:
Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row 1)
```
@@ -183,7 +184,8 @@ Code: 27. DB::Exception: Cannot parse input: expected ) before: now()): (at row
```sql
SET input_format_values_interpret_expressions = 1;
INSERT INTO datetime_t VALUES (now())
-
+```
+```text
Ok.
```
@@ -192,7 +194,8 @@ Ok.
```sql
SET input_format_values_interpret_expressions = 0;
INSERT INTO datetime_t SELECT now()
-
+```
+```text
Ok.
```
@@ -604,7 +607,7 @@ ClickHouse поддерживает следующие алгоритмы выб
### Random (by default) {#load_balancing-random}
-```
+```sql
load_balancing = random
```
@@ -613,7 +616,7 @@ load_balancing = random
### Nearest Hostname {#load_balancing-nearest_hostname}
-```
+```sql
load_balancing = nearest_hostname
```
@@ -627,7 +630,7 @@ load_balancing = nearest_hostname
### In Order {#load_balancing-in_order}
-```
+```sql
load_balancing = in_order
```
@@ -636,7 +639,7 @@ load_balancing = in_order
### First or Random {#load_balancing-first_or_random}
-```
+```sql
load_balancing = first_or_random
```
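+
+Алгоритм балансировки можно задать для текущей сессии запросом `SET`. Ниже набросок; имя распределённой таблицы `dist_hits` условное:
+
+```sql
+-- dist_hits: условная распределённая таблица
+SET load_balancing = 'nearest_hostname';
+SELECT count() FROM dist_hits;
+```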
@@ -841,4 +844,34 @@ load_balancing = first_or_random
- [Множественный JOIN](../../query_language/select.md#select-join)
+
+## optimize_throw_if_noop {#setting-optimize_throw_if_noop}
+
+Включает или отключает генерирование исключения в случаях, когда запрос [OPTIMIZE](../../query_language/misc.md#misc_operations-optimize) не выполняет мёрж.
+
+По умолчанию, `OPTIMIZE` завершается успешно и в тех случаях, когда он ничего не сделал. Настройка позволяет отделить подобные случаи и включает генерирование исключения с поясняющим сообщением.
+
+Возможные значения:
+
+- 1 — генерирование исключения включено.
+- 0 — генерирование исключения выключено.
+
+Значение по умолчанию — 0.
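+
+Набросок использования (имя таблицы `visits` и партиция условные):
+
+```sql
+-- visits: условная таблица на движке семейства MergeTree
+SET optimize_throw_if_noop = 1;
+OPTIMIZE TABLE visits PARTITION 201902; -- если мёрж не выполнен, вернётся исключение с пояснением
+```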
+
+## os_thread_priority {#setting-os_thread_priority}
+
+Устанавливает приоритет ([nice](https://en.wikipedia.org/wiki/Nice_(Unix))) для потоков, исполняющих запросы. Планировщик ОС учитывает эти приоритеты при выборе следующего потока для исполнения на доступном ядре CPU.
+
+!!! warning "Предупреждение"
+ Для использования этой настройки необходимо установить свойство `CAP_SYS_NICE`. Пакет `clickhouse-server` устанавливает его во время инсталляции. Некоторые виртуальные окружения не позволяют установить `CAP_SYS_NICE`. В этом случае `clickhouse-server` выводит соответствующее сообщение при запуске.
+
+Допустимые значения:
+
+- Любое значение из диапазона `[-20, 19]`.
+
+Более низкие значения означают более высокий приоритет. Потоки с низкими значениями приоритета `nice` выполняются чаще, чем потоки с более высокими значениями. Высокие значения предпочтительно использовать для долгих неинтерактивных запросов, поскольку это позволяет быстро выделить ресурс в пользу коротких интерактивных запросов.
+
+Значение по умолчанию — 0.
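+
+Набросок использования для долгого неинтерактивного запроса (имя таблицы `hits` условное):
+
+```sql
+-- hits: условная таблица; низкий приоритет уступает CPU коротким интерактивным запросам
+SET os_thread_priority = 19;
+SELECT CounterID, count() FROM hits GROUP BY CounterID;
+```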
+
+
[Оригинальная статья](https://clickhouse.yandex/docs/ru/operations/settings/settings/)
diff --git a/docs/ru/operations/settings/settings_profiles.md b/docs/ru/operations/settings/settings_profiles.md
index 212e577faaf..a120c388880 100644
--- a/docs/ru/operations/settings/settings_profiles.md
+++ b/docs/ru/operations/settings/settings_profiles.md
@@ -8,7 +8,7 @@
Установить профиль `web`.
-``` sql
+```sql
SET profile = 'web'
```
diff --git a/docs/ru/operations/settings/settings_users.md b/docs/ru/operations/settings/settings_users.md
index 85233f19baa..adb50a02a9d 100644
--- a/docs/ru/operations/settings/settings_users.md
+++ b/docs/ru/operations/settings/settings_users.md
@@ -4,7 +4,7 @@
Структура раздела `users`:
-```
+```xml
@@ -79,7 +79,7 @@
Чтобы открыть доступ пользователю из любой сети, укажите:
-```
+```xml
<ip>::/0</ip>
```
@@ -88,7 +88,7 @@
Чтобы открыть только локальный доступ, укажите:
-```
+```xml
<ip>::1</ip>
<ip>127.0.0.1</ip>
```
@@ -111,7 +111,7 @@
Следующая конфигурация задаёт, что пользователь `user1` в результате запросов `SELECT` может получать только те строки `table1`, в которых значение поля `id` равно 1000.
-```
+```xml
<user1>
    <databases>
        <database_name>
diff --git a/docs/ru/operations/system_tables.md b/docs/ru/operations/system_tables.md
index 4c66664b921..3aa77776c44 100644
--- a/docs/ru/operations/system_tables.md
+++ b/docs/ru/operations/system_tables.md
@@ -47,7 +47,7 @@ SELECT * FROM system.asynchronous_metrics LIMIT 10
Содержит информацию о доступных в конфигурационном файле кластерах и серверах, которые в них входят.
Столбцы:
-```
+```text
cluster String — имя кластера.
shard_num UInt32 — номер шарда в кластере, начиная с 1.
shard_weight UInt32 — относительный вес шарда при записи данных
@@ -356,7 +356,7 @@ SELECT * FROM system.metrics LIMIT 10
Эта системная таблица используется для реализации запроса `SHOW PROCESSLIST`.
Столбцы:
-```
+```text
user String - имя пользователя, который задал запрос. При распределённой обработке запроса, относится к пользователю, с помощью которого сервер-инициатор запроса отправил запрос на данный сервер, а не к имени пользователя, который задал распределённый запрос на сервер-инициатор запроса.
address String - IP-адрес, с которого задан запрос. При распределённой обработке запроса, аналогично.
@@ -480,7 +480,7 @@ WHERE table = 'visits'
FORMAT Vertical
```
-```
+```text
Row 1:
──────
database: merge
@@ -506,7 +506,7 @@ active_replicas: 2
Столбцы:
-```
+```text
database: имя БД
table: имя таблицы
engine: имя движка таблицы
@@ -598,7 +598,7 @@ WHERE
Столбцы:
-```
+```text
name String - имя настройки
value String - значение настройки
changed UInt8 - была ли настройка явно задана в конфиге или изменена явным образом
@@ -612,7 +612,7 @@ FROM system.settings
WHERE changed
```
-```
+```text
┌─name───────────────────┬─value───────┬─changed─┐
│ max_threads │ 8 │ 1 │
│ use_uncompressed_cache │ 0 │ 1 │
@@ -681,7 +681,7 @@ WHERE path = '/clickhouse/tables/01-08/visits/replicas'
FORMAT Vertical
```
-```
+```text
Row 1:
──────
name: example01-08-1.yandex.ru
diff --git a/docs/ru/operations/table_engines/aggregatingmergetree.md b/docs/ru/operations/table_engines/aggregatingmergetree.md
index 97ddee14714..fa452829a5b 100644
--- a/docs/ru/operations/table_engines/aggregatingmergetree.md
+++ b/docs/ru/operations/table_engines/aggregatingmergetree.md
@@ -11,7 +11,7 @@
## Создание таблицы
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -60,7 +60,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Создаём материализованное представление типа `AggregatingMergeTree`, следящее за таблицей `test.visits`:
-``` sql
+```sql
CREATE MATERIALIZED VIEW test.basic
ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMM(StartDate) ORDER BY (CounterID, StartDate)
AS SELECT
@@ -74,7 +74,7 @@ GROUP BY CounterID, StartDate;
Вставляем данные в таблицу `test.visits`:
-``` sql
+```sql
INSERT INTO test.visits ...
```
@@ -82,7 +82,7 @@ INSERT INTO test.visits ...
Чтобы получить агрегированные данные, выполним запрос вида `SELECT ... GROUP BY ...` из представления `test.basic`:
-``` sql
+```sql
SELECT
StartDate,
sumMerge(Visits) AS Visits,
diff --git a/docs/ru/operations/table_engines/buffer.md b/docs/ru/operations/table_engines/buffer.md
index abc20b6a88c..bf3c1b450fc 100644
--- a/docs/ru/operations/table_engines/buffer.md
+++ b/docs/ru/operations/table_engines/buffer.md
@@ -2,7 +2,7 @@
Буферизует записываемые данные в оперативке, периодически сбрасывая их в другую таблицу. При чтении, производится чтение данных одновременно из буфера и из другой таблицы.
-```
+```sql
Buffer(database, table, num_layers, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes)
```
@@ -22,7 +22,7 @@ min_bytes, max_bytes - условие на количество байт в бу
Пример:
-``` sql
+```sql
CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000)
```
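+
+Минимальный самодостаточный набросок, показывающий, что чтение из таблицы типа `Buffer` объединяет данные буфера и таблицы назначения (имена `test.dst` и `test.dst_buffer` условные):
+
+```sql
+-- test.dst и test.dst_buffer: условные имена
+CREATE TABLE test.dst (x UInt64) ENGINE = Log;
+CREATE TABLE test.dst_buffer AS test.dst ENGINE = Buffer(test, dst, 16, 10, 100, 10000, 1000000, 10000000, 100000000);
+INSERT INTO test.dst_buffer VALUES (1); -- строка сначала попадает в буфер в оперативке
+SELECT count() FROM test.dst_buffer;    -- читает одновременно буфер и test.dst
+```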
diff --git a/docs/ru/operations/table_engines/collapsingmergetree.md b/docs/ru/operations/table_engines/collapsingmergetree.md
index 17e85bcca0c..8260ce1bc76 100644
--- a/docs/ru/operations/table_engines/collapsingmergetree.md
+++ b/docs/ru/operations/table_engines/collapsingmergetree.md
@@ -65,7 +65,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Например, мы хотим рассчитать, сколько страниц проверили пользователи на каком-то сайте и как долго они там находились. В какой-то момент времени мы пишем следующую строку с состоянием действий пользователя:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
@@ -73,7 +73,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Через некоторое время мы регистрируем изменение активности пользователя и записываем его следующими двумя строками.
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -86,7 +86,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Поскольку нам нужно только последнее состояние активности пользователя, строки
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │
@@ -134,7 +134,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Исходные данные:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ 5 │ 146 │ -1 │
@@ -170,11 +170,11 @@ INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1),(4324182021466249494,
Получение данных:
-```
+```sql
SELECT * FROM UAct
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │
│ 4324182021466249494 │ 6 │ 185 │ 1 │
@@ -200,7 +200,7 @@ GROUP BY UserID
HAVING sum(Sign) > 0
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┐
│ 4324182021466249494 │ 6 │ 185 │
└─────────────────────┴───────────┴──────────┘
@@ -212,7 +212,7 @@ HAVING sum(Sign) > 0
SELECT * FROM UAct FINAL
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
@@ -224,7 +224,7 @@ SELECT * FROM UAct FINAL
Исходные данные:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │
│ 4324182021466249494 │ -5 │ -146 │ -1 │
@@ -253,28 +253,39 @@ insert into UAct values(4324182021466249494, -5, -146, -1);
insert into UAct values(4324182021466249494, 6, 185, 1);
select * from UAct final; -- старайтесь не использовать final (он подходит только для тестов и маленьких таблиц)
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
-
+```
+```sql
SELECT
UserID,
sum(PageViews) AS PageViews,
sum(Duration) AS Duration
FROM UAct
GROUP BY UserID
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┐
│ 4324182021466249494 │ 6 │ 185 │
└─────────────────────┴───────────┴──────────┘
-
+```
+```sql
select count() FROM UAct
+```
+```text
┌─count()─┐
│ 3 │
└─────────┘
-
+```
+```sql
optimize table UAct final;
select * FROM UAct
+```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┘
diff --git a/docs/ru/operations/table_engines/custom_partitioning_key.md b/docs/ru/operations/table_engines/custom_partitioning_key.md
index 87f7ea50c63..55904d7dbe0 100644
--- a/docs/ru/operations/table_engines/custom_partitioning_key.md
+++ b/docs/ru/operations/table_engines/custom_partitioning_key.md
@@ -6,7 +6,7 @@
Ключ партиционирования задается при [создании таблицы](mergetree.md#table_engine-mergetree-creating-a-table), в секции `PARTITION BY expr`. Ключ может представлять собой произвольное выражение из столбцов таблицы. Например, чтобы задать партиционирования по месяцам, можно использовать выражение `toYYYYMM(date_column)`:
-``` sql
+```sql
CREATE TABLE visits
(
VisitDate Date,
@@ -20,7 +20,7 @@ ORDER BY Hour
Ключом партиционирования также может быть кортеж из выражений (аналогично [первичному ключу](mergetree.md#primary-keys-and-indexes-in-queries)). Например:
-``` sql
+```sql
ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/name', 'replica1', Sign)
PARTITION BY (toMonday(StartDate), EventType)
ORDER BY (CounterID, StartDate, intHash32(UserID));
@@ -44,7 +44,7 @@ FROM system.parts
WHERE table = 'visits'
```
-```
+```text
┌─partition─┬─name───────────┬─active─┐
│ 201901 │ 201901_1_3_1 │ 0 │
│ 201901 │ 201901_1_9_2 │ 1 │
@@ -80,7 +80,7 @@ WHERE table = 'visits'
OPTIMIZE TABLE visits PARTITION 201902;
```
-```
+```text
┌─partition─┬─name───────────┬─active─┐
│ 201901 │ 201901_1_3_1 │ 0 │
│ 201901 │ 201901_1_9_2 │ 1 │
@@ -98,7 +98,7 @@ OPTIMIZE TABLE visits PARTITION 201902;
Другой способ посмотреть набор кусков и партиций – зайти в директорию с данными таблицы: `/var/lib/clickhouse/data/<database>/<table>/`. Например:
```bash
-dev:/var/lib/clickhouse/data/default/visits$ ls -l
+/var/lib/clickhouse/data/default/visits$ ls -l
total 40
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 1 16:48 201901_1_3_1
drwxr-xr-x 2 clickhouse clickhouse 4096 Feb 5 16:17 201901_1_9_2
diff --git a/docs/ru/operations/table_engines/dictionary.md b/docs/ru/operations/table_engines/dictionary.md
index 650ad9466a7..d24dca27a07 100644
--- a/docs/ru/operations/table_engines/dictionary.md
+++ b/docs/ru/operations/table_engines/dictionary.md
@@ -38,9 +38,7 @@
Запрос данных словаря:
-``` sql
-select name, type, key, attribute.names, attribute.types, bytes_allocated, element_count,source from system.dictionaries where name = 'products';
-
+```sql
SELECT
name,
type,
@@ -53,7 +51,7 @@ SELECT
FROM system.dictionaries
WHERE name = 'products'
```
-```
+```text
┌─name─────┬─type─┬─key────┬─attribute.names─┬─attribute.types─┬─bytes_allocated─┬─element_count─┬─source──────────┐
│ products │ Flat │ UInt64 │ ['title'] │ ['String'] │ 23065376 │ 175032 │ ODBC: .products │
└──────────┴──────┴────────┴─────────────────┴─────────────────┴─────────────────┴───────────────┴─────────────────┘
@@ -65,45 +63,27 @@ WHERE name = 'products'
Синтаксис:
-```
+```sql
CREATE TABLE %table_name% (%fields%) engine = Dictionary(%dictionary_name%)
```
Пример использования:
-``` sql
+```sql
create table products (product_id UInt64, title String) Engine = Dictionary(products);
-
-CREATE TABLE products
-(
- product_id UInt64,
- title String,
-)
-ENGINE = Dictionary(products)
-```
-```
-Ok.
-
-0 rows in set. Elapsed: 0.004 sec.
```
Проверим, что у нас в таблице:
-``` sql
+```sql
select * from products limit 1;
-
-SELECT *
-FROM products
-LIMIT 1
```
-```
+```text
┌────product_id─┬─title───────────┐
│ 152689 │ Some item │
└───────────────┴─────────────────┘
-
-1 rows in set. Elapsed: 0.006 sec.
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/operations/table_engines/dictionary/)
diff --git a/docs/ru/operations/table_engines/distributed.md b/docs/ru/operations/table_engines/distributed.md
index be3dde014cf..52743f72703 100644
--- a/docs/ru/operations/table_engines/distributed.md
+++ b/docs/ru/operations/table_engines/distributed.md
@@ -6,7 +6,7 @@
Движок Distributed принимает параметры: имя кластера в конфигурационном файле сервера, имя удалённой базы данных, имя удалённой таблицы, а также (не обязательно) ключ шардирования.
Пример:
-```
+```sql
Distributed(logs, default, hits[, sharding_key])
```
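+
+Полный запрос создания распределённой таблицы может выглядеть примерно так (набросок; кластер `logs` и таблица `default.hits` условные):
+
+```sql
+-- кластер logs и таблица default.hits: условные имена
+CREATE TABLE hits_all AS default.hits
+ENGINE = Distributed(logs, default, hits, rand());
+```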
diff --git a/docs/ru/operations/table_engines/external_data.md b/docs/ru/operations/table_engines/external_data.md
index 430f90a82ae..358ad7ff9c9 100644
--- a/docs/ru/operations/table_engines/external_data.md
+++ b/docs/ru/operations/table_engines/external_data.md
@@ -33,9 +33,9 @@ ClickHouse позволяет отправить на сервер данные,
Примеры:
```bash
-echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8
+$ echo -ne "1\n2\n3\n" | clickhouse-client --query="SELECT count() FROM test.visits WHERE TraficSourceID IN _data" --external --file=- --types=Int8
849897
-cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String'
+$ cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC" --external --file=- --name=passwd --structure='login String, unused String, uid UInt16, gid UInt16, comment String, home String, shell String'
/bin/sh 20
/bin/false 5
/bin/bash 4
@@ -48,9 +48,9 @@ cat /etc/passwd | sed 's/:/\t/g' | clickhouse-client --query="SELECT shell, coun
Пример:
```bash
-cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv
+$ cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv
-curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String'
+$ curl -F 'passwd=@passwd.tsv;' 'http://localhost:8123/?query=SELECT+shell,+count()+AS+c+FROM+passwd+GROUP+BY+shell+ORDER+BY+c+DESC&passwd_structure=login+String,+unused+String,+uid+UInt16,+gid+UInt16,+comment+String,+home+String,+shell+String'
/bin/sh 20
/bin/false 5
/bin/bash 4
diff --git a/docs/ru/operations/table_engines/file.md b/docs/ru/operations/table_engines/file.md
index b67823b988a..bb8f831235f 100644
--- a/docs/ru/operations/table_engines/file.md
+++ b/docs/ru/operations/table_engines/file.md
@@ -10,7 +10,7 @@
## Использование движка в сервере ClickHouse
-```
+```sql
File(Format)
```
@@ -29,7 +29,7 @@ File(Format)
**1.** Создадим на сервере таблицу `file_engine_table`:
-``` sql
+```sql
CREATE TABLE file_engine_table (name String, value UInt32) ENGINE=File(TabSeparated)
```
@@ -45,11 +45,11 @@ two 2
**3.** Запросим данные:
-``` sql
+```sql
SELECT * FROM file_engine_table
```
-```
+```text
┌─name─┬─value─┐
│ one │ 1 │
│ two │ 2 │
diff --git a/docs/ru/operations/table_engines/graphitemergetree.md b/docs/ru/operations/table_engines/graphitemergetree.md
index f58d4b3d047..40948512a2c 100644
--- a/docs/ru/operations/table_engines/graphitemergetree.md
+++ b/docs/ru/operations/table_engines/graphitemergetree.md
@@ -76,7 +76,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Структура конфигурации rollup:
-```
+```text
required-columns
patterns
```
@@ -92,7 +92,7 @@ patterns
Структура раздела `patterns`:
-```
+```text
pattern
regexp
function
diff --git a/docs/ru/operations/table_engines/hdfs.md b/docs/ru/operations/table_engines/hdfs.md
index 3f42c9ec447..303f0a07d19 100644
--- a/docs/ru/operations/table_engines/hdfs.md
+++ b/docs/ru/operations/table_engines/hdfs.md
@@ -1,36 +1,37 @@
# HDFS {#table_engines-hdfs}
-Управляет данными в HDFS. Данный движок похож на движок [File](file.md) и на движок [URL](url.md).
+Управляет данными в HDFS. Данный движок похож на движки [File](file.md) и [URL](url.md).
## Использование движка
-```
+```sql
ENGINE = HDFS(URI, format)
```
В параметр `URI` нужно передавать полный URI файла в HDFS.
Параметр `format` должен быть таким, который ClickHouse может использовать и в запросах `INSERT`, и в запросах `SELECT`. Полный список поддерживаемых форматов смотрите в разделе [Форматы](../../interfaces/formats.md#formats).
+Часть URI с путем файла может содержать шаблоны. В этом случае таблица может использоваться только для чтения.
**Пример:**
**1.** Создадим на сервере таблицу `hdfs_engine_table`:
-``` sql
+```sql
CREATE TABLE hdfs_engine_table (name String, value UInt32) ENGINE=HDFS('hdfs://hdfs1:9000/other_storage', 'TSV')
```
**2.** Заполним файл:
-``` sql
+```sql
INSERT INTO hdfs_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
```
**3.** Запросим данные:
-``` sql
+```sql
SELECT * FROM hdfs_engine_table LIMIT 2
```
-```
+```text
┌─name─┬─value─┐
│ one │ 1 │
│ two │ 2 │
@@ -45,4 +46,55 @@ SELECT * FROM hdfs_engine_table LIMIT 2
- индексы;
- репликация.
+**Шаблоны в пути**
+
+Шаблоны могут содержаться в нескольких компонентах пути. Обрабатываются только существующие файлы, название которых целиком удовлетворяет шаблону (не только суффиксом или префиксом).
+
+- `*` — Заменяет любое количество любых символов кроме `/`, включая отсутствие символов.
+- `?` — Заменяет ровно один любой символ.
+- `{some_string,another_string,yet_another_one}` — Заменяет любую из строк `'some_string', 'another_string', 'yet_another_one'`.
+- `{N..M}` — Заменяет любое число в интервале от `N` до `M` включительно.
+
+Конструкция с `{}` аналогична табличной функции [remote](remote.md).
+
+**Пример**
+
+1. Предположим, у нас есть несколько файлов со следующими URI в HDFS:
+
+- 'hdfs://hdfs1:9000/some_dir/some_file_1'
+- 'hdfs://hdfs1:9000/some_dir/some_file_2'
+- 'hdfs://hdfs1:9000/some_dir/some_file_3'
+- 'hdfs://hdfs1:9000/another_dir/some_file_1'
+- 'hdfs://hdfs1:9000/another_dir/some_file_2'
+- 'hdfs://hdfs1:9000/another_dir/some_file_3'
+
+2. Есть несколько возможностей создать таблицу, состоящую из этих шести файлов:
+
+```sql
+CREATE TABLE table_with_range (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_{1..3}', 'TSV')
+```
+
+Другой способ:
+
+```sql
+CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/some_file_?', 'TSV')
+```
+
+Таблица, состоящая из всех файлов в обеих директориях (все файлы должны удовлетворять формату и схеме, указанным в запросе):
+
+```sql
+CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/{some,another}_dir/*', 'TSV')
+```
+
+!!! warning
+ Если список файлов содержит числовые интервалы с ведущими нулями, используйте конструкцию с фигурными скобками для каждой цифры или используйте `?`.
+
+**Пример**
+
+Создадим таблицу, состоящую из файлов с именами `file000`, `file001`, ... , `file999`:
+
+```sql
+CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/big_dir/file{0..9}{0..9}{0..9}', 'CSV')
+```
+
[Оригинальная статья](https://clickhouse.yandex/docs/ru/operations/table_engines/hdfs/)
diff --git a/docs/ru/operations/table_engines/jdbc.md b/docs/ru/operations/table_engines/jdbc.md
index 97c52f7bfd9..08b1bef87d5 100644
--- a/docs/ru/operations/table_engines/jdbc.md
+++ b/docs/ru/operations/table_engines/jdbc.md
@@ -27,7 +27,7 @@ ENGINE = JDBC(dbms_uri, external_database, external_table)
Создадим таблицу на сервере MySQL с помощью консольного клиента MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
@@ -50,30 +50,28 @@ mysql> select * from test;
Создадим таблицу на сервере ClickHouse и получим из неё данные:
-```
+```sql
CREATE TABLE jdbc_table ENGINE JDBC('jdbc:mysql://localhost:3306/?user=root&password=root', 'test', 'test')
-
-Ok.
-
+```
+```sql
DESCRIBE TABLE jdbc_table
-
+```
+```text
┌─name───────────────┬─type───────────────┬─default_type─┬─default_expression─┐
│ int_id │ Int32 │ │ │
│ int_nullable │ Nullable(Int32) │ │ │
│ float │ Float32 │ │ │
│ float_nullable │ Nullable(Float32) │ │ │
└────────────────────┴────────────────────┴──────────────┴────────────────────┘
-
-10 rows in set. Elapsed: 0.031 sec.
-
+```
+```sql
SELECT *
FROM jdbc_table
-
+```
+```text
┌─int_id─┬─int_nullable─┬─float─┬─float_nullable─┐
│ 1 │ ᴺᵁᴸᴸ │ 2 │ ᴺᵁᴸᴸ │
└────────┴──────────────┴───────┴────────────────┘
-
-1 rows in set. Elapsed: 0.055 sec.
```
## Смотрите также
diff --git a/docs/ru/operations/table_engines/join.md b/docs/ru/operations/table_engines/join.md
index 0de007b321d..75676bdfa0a 100644
--- a/docs/ru/operations/table_engines/join.md
+++ b/docs/ru/operations/table_engines/join.md
@@ -4,7 +4,7 @@
## Создание таблицы
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
diff --git a/docs/ru/operations/table_engines/kafka.md b/docs/ru/operations/table_engines/kafka.md
index f2318d824e2..b79520cfd61 100644
--- a/docs/ru/operations/table_engines/kafka.md
+++ b/docs/ru/operations/table_engines/kafka.md
@@ -10,7 +10,7 @@ Kafka позволяет:
## Создание таблицы {#table_engine-kafka-creating-a-table}
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -77,7 +77,7 @@ SETTINGS
!!! attention
Не используйте этот метод в новых проектах. По возможности переключите старые проекты на метод, описанный выше.
-```
+```sql
Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
[, kafka_row_delimiter, kafka_schema, kafka_num_consumers, kafka_skip_broken_messages])
```
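+
+Набросок вызова в устаревшем синтаксисе (адрес брокера, топик и группа условные):
+
+```sql
+-- localhost:9092, topic и group1: условные значения
+CREATE TABLE queue (timestamp UInt64, level String, message String)
+ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');
+```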
@@ -125,7 +125,7 @@ Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format
Чтобы остановить получение данных топика или изменить логику преобразования, отсоедините материализованное представление:
-```
+```sql
DETACH TABLE consumer;
ATTACH MATERIALIZED VIEW consumer;
```
diff --git a/docs/ru/operations/table_engines/log_family.md b/docs/ru/operations/table_engines/log_family.md
index 14247331fc9..ef59b79abeb 100644
--- a/docs/ru/operations/table_engines/log_family.md
+++ b/docs/ru/operations/table_engines/log_family.md
@@ -1,6 +1,6 @@
# Семейство Log
-Движки разработаны для сценариев, когда необходимо записывать много таблиц с небольшим объемом данных (менее 1 миллиона строк).
+Движки разработаны для сценариев, когда необходимо быстро записывать много таблиц с небольшим объемом данных (менее 1 миллиона строк), а затем читать их целиком.
Движки семейства:
@@ -13,33 +13,24 @@
Движки:
- Хранят данные на диске.
-
- Добавляют данные в конец файла при записи.
+- Поддерживают блокировки для конкурентного доступа к данным.
+
+ Во время запросов `INSERT` таблица блокируется, а другие запросы на чтение и запись ожидают разблокировки таблицы. Если запросов на запись данных нет, то можно выполнять любое количество конкурентных запросов на чтение.
- Не поддерживают операции [мутации](../../query_language/alter.md#alter-mutations).
-
- Не поддерживают индексы.
Это означает, что запросы `SELECT` не эффективны для выборки диапазонов данных.
- Записывают данные не атомарно.
- Вы можете получить таблицу с повреждёнными данными, если что-то нарушит операцию записи (например, аварийное завершение работы сервера).
+ Вы можете получить таблицу с повреждёнными данными, если что-то прервёт операцию записи (например, аварийное завершение работы сервера).
## Отличия
-Движки `Log` и `StripeLog` поддерживают:
+Движок `TinyLog` самый простой в семействе и обеспечивает наименьшую функциональность и эффективность. Движок `TinyLog` не поддерживает параллельного чтения данных в несколько потоков. Он читает данные медленнее, чем оба других движка с параллельным чтением, и использует почти столько же дескрипторов, сколько и движок `Log`, поскольку хранит каждый столбец в отдельном файле. Его можно использовать в простых сценариях с низкой нагрузкой.
-- Блокировки для конкурентного доступа к данным.
-
- Во время выполнения запроса `INSERT` таблица заблокирована и другие запросы на чтение и запись данных ожидают снятия блокировки. При отсутствии запросов на запись данных можно одновременно выполнять любое количество запросов на чтение данных.
-
-- Параллельное чтение данных.
-
- ClickHouse читает данные в несколько потоков. Каждый поток обрабатывает отдельный блок данных.
-
-Движок `Log` сохраняет каждый столбец таблицы в отдельном файле. Движок `StripeLog` хранит все данные в одном файле. Таким образом, движок `StripeLog` использует меньше дескрипторов в операционной системе, а движок `Log` обеспечивает более эффективное считывание данных.
-
-Движок `TinyLog` самый простой в семье и обеспечивает самые низкие функциональность и эффективность. Движок `TinyLog` не поддерживает ни параллельного чтения данных, ни конкурентного доступа к данным. Он хранит каждый столбец в отдельном файле. Движок читает данные медленнее, чем оба других движка с параллельным чтением, и использует почти столько же дескрипторов, сколько и движок `Log`. Его можно использовать в простых сценариях с низкой нагрузкой.
+Движки `Log` и `StripeLog` поддерживают параллельное чтение. При чтении данных ClickHouse использует несколько потоков. Каждый поток обрабатывает отдельный блок данных. Движок `Log` сохраняет каждый столбец таблицы в отдельном файле. Движок `StripeLog` хранит все данные в одном файле. Таким образом, движок `StripeLog` использует меньше дескрипторов в операционной системе, а движок `Log` обеспечивает более эффективное считывание данных.
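+
+Например, минимальный набросок создания таблиц с этими движками (имена таблиц и столбцов условные):
+
+```sql
+CREATE TABLE log_example (id UInt64, message String) ENGINE = Log;
+CREATE TABLE stripe_log_example (id UInt64, message String) ENGINE = StripeLog;
+```
+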
[Оригинальная статья](https://clickhouse.yandex/docs/ru/operations/table_engines/log_family/)
diff --git a/docs/ru/operations/table_engines/merge.md b/docs/ru/operations/table_engines/merge.md
index b87bfdafd75..acd7618f9a0 100644
--- a/docs/ru/operations/table_engines/merge.md
+++ b/docs/ru/operations/table_engines/merge.md
@@ -7,7 +7,7 @@
Пример:
-```
+```sql
Merge(hits, '^WatchLog')
```
@@ -27,7 +27,7 @@ Merge(hits, '^WatchLog')
Пусть есть старая таблица `WatchLog_old`. Необходимо изменить партиционирование без перемещения данных в новую таблицу `WatchLog_new`. При этом в выборке должны участвовать данные обеих таблиц.
-```
+```sql
CREATE TABLE WatchLog_old(date Date, UserId Int64, EventType String, Cnt UInt64)
ENGINE=MergeTree(date, (UserId, EventType), 8192);
INSERT INTO WatchLog_old VALUES ('2018-01-01', 1, 'hit', 3);
@@ -40,14 +40,14 @@ CREATE TABLE WatchLog as WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog
SELECT *
FROM WatchLog
-
+```
+```text
┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐
│ 2018-01-01 │ 1 │ hit │ 3 │
└────────────┴────────┴───────────┴─────┘
┌───────date─┬─UserId─┬─EventType─┬─Cnt─┐
│ 2018-01-02 │ 2 │ hit │ 3 │
└────────────┴────────┴───────────┴─────┘
-
```
## Виртуальные столбцы
diff --git a/docs/ru/operations/table_engines/mergetree.md b/docs/ru/operations/table_engines/mergetree.md
index c1fb3dc9edd..61bdbc76457 100644
--- a/docs/ru/operations/table_engines/mergetree.md
+++ b/docs/ru/operations/table_engines/mergetree.md
@@ -44,7 +44,10 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
[SETTINGS name=value, ...]
```
-Описание параметров запроса смотрите в [описании запроса](../../query_language/create.md).
+Описание параметров смотрите в [описании запроса CREATE](../../query_language/create.md).
+
+!!! note "Примечание"
+ `INDEX` — экспериментальная возможность, смотрите [Индексы пропуска данных](#table_engine-mergetree-data_skipping-indexes).
### Секции запроса
@@ -75,7 +78,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
- `SETTINGS` — дополнительные параметры, регулирующие поведение `MergeTree`:
- - `index_granularity` — гранулярность индекса. Число строк данных между «засечками» индекса. По умолчанию — 8192. Список всех доступных параметров можно посмотреть в [MergeTreeSettings.h](https://github.com/yandex/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
+ - `index_granularity` — гранулярность индекса. Число строк данных между «засечками» индекса. По умолчанию — 8192. Список всех доступных параметров можно посмотреть в [MergeTreeSettings.h](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/src/Storages/MergeTree/MergeTreeSettings.h).
- `min_merge_bytes_to_use_direct_io` — минимальный объем данных, необходимый для прямого (небуферизованного) чтения/записи (direct I/O) на диск. При слиянии частей данных ClickHouse вычисляет общий объем хранения всех данных, подлежащих слиянию. Если общий объем хранения всех данных для чтения превышает `min_merge_bytes_to_use_direct_io` байт, тогда ClickHouse использует флаг `O_DIRECT` при чтении данных с диска. Если `min_merge_bytes_to_use_direct_io = 0`, тогда прямой ввод-вывод отключен. Значение по умолчанию: `10 * 1024 * 1024 * 1024` байт.
- `merge_with_ttl_timeout` — минимальное время в секундах для повторного выполнения слияний с TTL. По умолчанию: 86400 (1 день).
@@ -115,7 +118,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
**Пример**
-```
+```sql
MergeTree(EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID)), 8192)
```
@@ -139,7 +142,7 @@ MergeTree(EventDate, intHash32(UserID), (CounterID, EventDate, intHash32(UserID)
Рассмотрим первичный ключ — `(CounterID, Date)`. В этом случае сортировку и индекс можно проиллюстрировать следующим образом:
-```
+```text
Whole data: [-------------------------------------------------------------------------]
CounterID: [aaaaaaaaaaaaaaaaaabbbbcdeeeeeeeeeeeeefgggggggghhhhhhhhhiiiiiiiiikllllllll]
Date: [1111111222222233331233211111222222333211111112122222223111112223311122333]
@@ -210,7 +213,7 @@ ClickHouse не требует уникального первичного кл
Рассмотрим движок сконфигурированный следующим образом:
-```
+```sql
ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate) SETTINGS index_granularity=8192
```
@@ -244,7 +247,7 @@ ClickHouse не может использовать индекс, если зн
ClickHouse использует эту логику не только для последовательностей дней месяца, но и для любого частично-монотонного первичного ключа.
-### Дополнительные индексы (Экспериментальная функциональность)
+### Индексы пропуска данных (экспериментальная функциональность) {#table_engine-mergetree-data_skipping-indexes}
Для использования требуется установить настройку `allow_experimental_data_skipping_indices` в 1. (запустить `SET allow_experimental_data_skipping_indices = 1`).
@@ -282,11 +285,18 @@ SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
#### Доступные индексы
-* `minmax`
-Хранит минимум и максимум выражения (если выражение - `tuple`, то для каждого элемента `tuple`), используя их для пропуска блоков аналогично первичному ключу.
+- `minmax` — Хранит минимум и максимум выражения (если выражение - `tuple`, то для каждого элемента `tuple`), используя их для пропуска блоков аналогично первичному ключу.
+
+- `set(max_rows)` — Хранит уникальные значения выражения на блоке в количестве не более `max_rows` (если `max_rows = 0`, то ограничений нет), используя их для пропуска блоков, оценивая выполнимость `WHERE` выражения на хранимых данных.
+
+- `bloom_filter([false_positive])` — [фильтр Блума](https://en.wikipedia.org/wiki/Bloom_filter) для указанных столбцов (набросок объявления такого индекса приведён ниже, после примеров).
+
+ Необязательный параметр `false_positive` — это вероятность получения ложноположительного срабатывания. Возможные значения: (0, 1). Значение по умолчанию: 0.025.
+
+  Поддерживаемые типы данных: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`.
+
+ Фильтром могут пользоваться функции: [equals](../../query_language/functions/comparison_functions.md), [notEquals](../../query_language/functions/comparison_functions.md), [in](../../query_language/functions/in_functions.md), [notIn](../../query_language/functions/in_functions.md).
-* `set(max_rows)`
-Хранит уникальные значения выражения на блоке в количестве не более `max_rows` (если `max_rows = 0`, то ограничений нет), используя их для пропуска блоков, оценивая выполнимость `WHERE` выражения на хранимых данных.
**Примеры**
@@ -295,6 +305,48 @@ INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE minmax GRANULARITY
INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARITY 4
```
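+
+Для индекса `bloom_filter` объявление выглядит аналогично; набросок (имя столбца условное):
+
+```sql
+INDEX c (str) TYPE bloom_filter(0.025) GRANULARITY 4
+```
+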
+#### Поддержка для функций
+
+Условия в секции `WHERE` содержат вызовы функций, оперирующих со столбцами. Если столбец входит в индекс, ClickHouse пытается использовать индекс при выполнении функции. Для разных видов индексов ClickHouse поддерживает разные наборы функций, которые могут использовать индекс.
+
+Индекс `set` используется со всеми функциями. Наборы функций для остальных индексов представлены в таблице ниже.
+
+Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter
+----------------------------|-------------|--------|------------|------------|---------------
+[equals (=, ==)](../../query_language/functions/comparison_functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔
+[notEquals(!=, <>)](../../query_language/functions/comparison_functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔
+[like](../../query_language/functions/string_search_functions.md#function-like) | ✔ | ✔ | ✔ | ✗ | ✗
+[notLike](../../query_language/functions/string_search_functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗
+[startsWith](../../query_language/functions/string_functions.md#function-startswith) | ✔ | ✔ | ✔ | ✔ | ✗
+[endsWith](../../query_language/functions/string_functions.md#function-endswith) | ✗ | ✗ | ✔ | ✔ | ✗
+[multiSearchAny](../../query_language/functions/string_search_functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✔ | ✗
+[in](../../query_language/functions/in_functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔
+[notIn](../../query_language/functions/in_functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔
+[less (<)](../../query_language/functions/comparison_functions.md#function-less) | ✔ | ✔ | ✗ | ✗ | ✗
+[greater (>)](../../query_language/functions/comparison_functions.md#function-greater) | ✔ | ✔ | ✗ | ✗ | ✗
+[lessOrEquals (<=)](../../query_language/functions/comparison_functions.md#function-lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗
+[greaterOrEquals (>=)](../../query_language/functions/comparison_functions.md#function-greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗
+[empty](../../query_language/functions/array_functions.md#function-empty) | ✔ | ✔ | ✗ | ✗ | ✗
+[notEmpty](../../query_language/functions/array_functions.md#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗
+hasToken | ✗ | ✗ | ✗ | ✔ | ✗
+
+Функции с константным аргументом, который меньше размера ngram, не могут использовать индекс `ngrambf_v1` для оптимизации запроса.
+
+Фильтры Блума могут давать ложноположительные срабатывания, поэтому индексы `ngrambf_v1`, `tokenbf_v1` и `bloom_filter` невозможно использовать для оптимизации запросов, в которых ожидается, что результат функции будет false, например:
+
+- Можно оптимизировать:
+ - `s LIKE '%test%'`
+ - `NOT s NOT LIKE '%test%'`
+ - `s = 1`
+ - `NOT s != 1`
+ - `startsWith(s, 'test')`
+- Нельзя оптимизировать:
+ - `NOT s LIKE '%test%'`
+ - `s NOT LIKE '%test%'`
+ - `NOT s = 1`
+ - `s != 1`
+ - `NOT startsWith(s, 'test')`
+
## Конкурентный доступ к данным
Для конкурентного доступа к таблице используется мультиверсионность. То есть, при одновременном чтении и обновлении таблицы, данные будут читаться из набора кусочков, актуального на момент запроса. Длинных блокировок нет. Вставки никак не мешают чтениям.
@@ -309,14 +361,14 @@ INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARIT
Таблица должна иметь столбец типа [Date](../../data_types/date.md) или [DateTime](../../data_types/datetime.md). Для установки времени жизни данных следует использовать операцию со столбцом с временем, например:
-```
+```sql
TTL time_column
TTL time_column + interval
```
Чтобы задать `interval`, используйте операторы [интервала времени](../../query_language/operators.md#operators-datetime).
-```
+```sql
TTL date_time + INTERVAL 1 MONTH
TTL date_time + INTERVAL 15 HOUR
```
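+
+Для иллюстрации, набросок таблицы с TTL на уровне столбца (имена таблицы и столбцов условные):
+
+```sql
+CREATE TABLE ttl_example
+(
+    event_time DateTime,
+    value UInt64 TTL event_time + INTERVAL 1 MONTH
+)
+ENGINE = MergeTree()
+ORDER BY event_time
+```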
diff --git a/docs/ru/operations/table_engines/mysql.md b/docs/ru/operations/table_engines/mysql.md
index 4f904959ef2..f35e2113bd1 100644
--- a/docs/ru/operations/table_engines/mysql.md
+++ b/docs/ru/operations/table_engines/mysql.md
@@ -44,7 +44,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Таблица в MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/ru/operations/table_engines/odbc.md b/docs/ru/operations/table_engines/odbc.md
index c3abd18aaa3..34d6bec239d 100644
--- a/docs/ru/operations/table_engines/odbc.md
+++ b/docs/ru/operations/table_engines/odbc.md
@@ -8,7 +8,7 @@
## Создание таблицы
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
name1 [type1],
@@ -41,15 +41,17 @@ ENGINE = ODBC(connection_settings, external_database, external_table)
По умолчанию (если установлен из пакетов) ClickHouse запускается от имени пользователя `clickhouse`. Таким образом, вам нужно создать и настроить этого пользователя на сервере MySQL.
+```bash
+$ sudo mysql
```
-sudo mysql
+```sql
mysql> CREATE USER 'clickhouse'@'localhost' IDENTIFIED BY 'clickhouse';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'clickhouse'@'localhost' WITH GRANT OPTION;
```
Теперь настроим соединение в `/etc/odbc.ini`.
-```
+```bash
$ cat /etc/odbc.ini
[mysqlconn]
DRIVER = /usr/local/lib/libmyodbc5w.so
@@ -62,8 +64,8 @@ PASSWORD = clickhouse
Вы можете проверить соединение с помощью утилиты `isql` из установки unixODBC.
-```
-isql -v mysqlconn
+```bash
+$ isql -v mysqlconn
+---------------------------------------+
| Connected! |
| |
@@ -72,7 +74,7 @@ isql -v mysqlconn
Таблица в MySQL:
-```
+```text
mysql> CREATE TABLE `test`.`test` (
-> `int_id` INT NOT NULL AUTO_INCREMENT,
-> `int_nullable` INT NULL DEFAULT NULL,
diff --git a/docs/ru/operations/table_engines/replication.md b/docs/ru/operations/table_engines/replication.md
index b94cf97762e..3076225feb3 100644
--- a/docs/ru/operations/table_engines/replication.md
+++ b/docs/ru/operations/table_engines/replication.md
@@ -154,7 +154,7 @@ CREATE TABLE table_name
Для запуска восстановления, создайте в ZooKeeper узел `/path_to_table/replica_name/flags/force_restore_data` с любым содержимым или выполните команду для восстановления всех реплицируемых таблиц:
```bash
-sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
+$ sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
```
Затем запустите сервер. При старте, сервер удалит эти флаги и запустит восстановление.
diff --git a/docs/ru/operations/table_engines/stripelog.md b/docs/ru/operations/table_engines/stripelog.md
index c4ab87fc855..7e80f79255f 100644
--- a/docs/ru/operations/table_engines/stripelog.md
+++ b/docs/ru/operations/table_engines/stripelog.md
@@ -6,7 +6,7 @@
## Создание таблицы {#table_engines-stripelog-creating-a-table}
-```
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
column1_name [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
@@ -61,7 +61,7 @@ ClickHouse использует несколько потоков при выб
SELECT * FROM stripe_log_table
```
-```
+```text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:27:32 │ REGULAR │ The second regular message │
│ 2019-01-18 14:34:53 │ WARNING │ The first warning message │
@@ -77,7 +77,7 @@ SELECT * FROM stripe_log_table
SELECT * FROM stripe_log_table ORDER BY timestamp
```
-```
+```text
┌───────────timestamp─┬─message_type─┬─message────────────────────┐
│ 2019-01-18 14:23:43 │ REGULAR │ The first regular message │
│ 2019-01-18 14:27:32 │ REGULAR │ The second regular message │
diff --git a/docs/ru/operations/table_engines/summingmergetree.md b/docs/ru/operations/table_engines/summingmergetree.md
index 9ca1052fe5e..05d20ba60a6 100644
--- a/docs/ru/operations/table_engines/summingmergetree.md
+++ b/docs/ru/operations/table_engines/summingmergetree.md
@@ -68,8 +68,8 @@ ORDER BY key
Добавим в неё данные:
-```
-:) INSERT INTO summtt Values(1,1),(1,2),(2,1)
+```sql
+INSERT INTO summtt Values(1,1),(1,2),(2,1)
```
ClickHouse может не полностью просуммировать все строки ([смотрите ниже по тексту](#obrabotka-dannykh)), поэтому при запросе мы используем агрегатную функцию `sum` и секцию `GROUP BY`.
@@ -77,7 +77,7 @@ ClickHouse может не полностью просуммировать вс
```sql
SELECT key, sum(value) FROM summtt GROUP BY key
```
-```
+```text
┌─key─┬─sum(value)─┐
│ 2 │ 1 │
│ 1 │ 3 │
@@ -118,7 +118,7 @@ ClickHouse может слить куски данных таким образо
Примеры:
-```
+```text
[(1, 100)] + [(2, 150)] -> [(1, 100), (2, 150)]
[(1, 100)] + [(1, 150)] -> [(1, 250)]
[(1, 100)] + [(1, 150), (2, 150)] -> [(1, 250), (2, 150)]
diff --git a/docs/ru/operations/table_engines/tinylog.md b/docs/ru/operations/table_engines/tinylog.md
index 22b0ffb5e50..673647f628a 100644
--- a/docs/ru/operations/table_engines/tinylog.md
+++ b/docs/ru/operations/table_engines/tinylog.md
@@ -2,21 +2,8 @@
Движок относится к семейству движков Log. Смотрите общие свойства и различия движков в статье [Семейство Log](log_family.md).
-Самый простой движок таблиц, который хранит данные на диске.
-Каждый столбец хранится в отдельном сжатом файле.
-При записи, данные дописываются в конец файлов.
+Типичный способ использования этого движка — это write-once: сначала данные один раз записываются, а затем читаются столько раз, сколько это необходимо. Например, таблицы с движком `TinyLog` можно использовать для хранения промежуточных данных, которые обрабатываются небольшими блоками. Учтите, что хранить данные в большом количестве мелких таблиц неэффективно.
-Конкурентный доступ к данным никак не ограничивается:
-
-- если вы одновременно читаете из таблицы и в другом запросе пишете в неё, то чтение будет завершено с ошибкой;
-- если вы одновременно пишете в таблицу в нескольких запросах, то данные будут битыми.
-
-Типичный способ использования этой таблицы - это write-once: сначала один раз только пишем данные, а потом сколько угодно читаем.
-Запросы выполняются в один поток. То есть, этот движок предназначен для сравнительно маленьких таблиц (рекомендуется до 1 000 000 строк).
-Этот движок таблиц имеет смысл использовать лишь в случае, если у вас есть много маленьких таблиц, так как он проще, чем движок Log (требуется открывать меньше файлов).
-Случай, когда у вас много маленьких таблиц, является гарантированно плохим по производительности, но может уже использоваться при работе с другой СУБД, и вам может оказаться удобнее перейти на использование таблиц типа TinyLog.
-**Индексы не поддерживаются.**
-
-В Яндекс.Метрике таблицы типа TinyLog используются для промежуточных данных, обрабатываемых маленькими пачками.
+Запросы выполняются в один поток. То есть, этот движок предназначен для сравнительно маленьких таблиц (до 1 000 000 строк). Этот движок таблиц имеет смысл использовать в том случае, когда у вас есть много маленьких таблиц, так как он проще, чем движок [Log](log.md) (требуется открывать меньше файлов).
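+
+Минимальный набросок создания такой таблицы (имена условные):
+
+```sql
+CREATE TABLE tiny_log_example (id UInt64, value String) ENGINE = TinyLog
+```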
[Оригинальная статья](https://clickhouse.yandex/docs/ru/operations/table_engines/tinylog/)
diff --git a/docs/ru/operations/table_engines/url.md b/docs/ru/operations/table_engines/url.md
index f67242ee870..8e8313e814e 100644
--- a/docs/ru/operations/table_engines/url.md
+++ b/docs/ru/operations/table_engines/url.md
@@ -21,7 +21,7 @@
**1.** Создадим на сервере таблицу `url_engine_table`:
-``` sql
+```sql
CREATE TABLE url_engine_table (word String, value UInt64)
ENGINE=URL('http://127.0.0.1:12345/', CSV)
```
@@ -46,16 +46,16 @@ if __name__ == "__main__":
```
```bash
-python3 server.py
+$ python3 server.py
```
**3.** Запросим данные:
-``` sql
+```sql
SELECT * FROM url_engine_table
```
-```
+```text
┌─word──┬─value─┐
│ Hello │ 1 │
│ World │ 2 │
diff --git a/docs/ru/operations/table_engines/versionedcollapsingmergetree.md b/docs/ru/operations/table_engines/versionedcollapsingmergetree.md
index 43dfd083630..0de677e726b 100644
--- a/docs/ru/operations/table_engines/versionedcollapsingmergetree.md
+++ b/docs/ru/operations/table_engines/versionedcollapsingmergetree.md
@@ -28,7 +28,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
**Параметры движка**
-```
+```sql
VersionedCollapsingMergeTree(sign, version)
```
@@ -80,7 +80,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Например, мы хотим рассчитать, сколько страниц пользователи посетили на каком-либо сайте и как долго они там находились. В какой-то момент времени мы записываем следующую строку состояния пользовательской активности:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 |
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
@@ -88,7 +88,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Через некоторое время мы регистрируем изменение активности пользователя и записываем его следующими двумя строками.
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 |
│ 4324182021466249494 │ 6 │ 185 │ 1 │ 2 |
@@ -101,7 +101,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Поскольку нам нужно только последнее состояние активности пользователя, строки
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 |
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 |
@@ -138,7 +138,7 @@ ClickHouse не гарантирует, что все строки с одина
Данные для примера:
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 |
│ 4324182021466249494 │ 5 │ 146 │ -1 │ 1 |
@@ -175,11 +175,11 @@ INSERT INTO UAct VALUES (4324182021466249494, 5, 146, -1, 1),(432418202146624949
Получение данных:
-```
+```sql
SELECT * FROM UAct
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 5 │ 146 │ 1 │ 1 │
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
@@ -206,7 +206,7 @@ GROUP BY UserID, Version
HAVING sum(Sign) > 0
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Version─┐
│ 4324182021466249494 │ 6 │ 185 │ 2 │
└─────────────────────┴───────────┴──────────┴─────────┘
@@ -218,7 +218,7 @@ HAVING sum(Sign) > 0
SELECT * FROM UAct FINAL
```
-```
+```text
┌──────────────UserID─┬─PageViews─┬─Duration─┬─Sign─┬─Version─┐
│ 4324182021466249494 │ 6 │ 185 │ 1 │ 2 │
└─────────────────────┴───────────┴──────────┴──────┴─────────┘
diff --git a/docs/ru/operations/tips.md b/docs/ru/operations/tips.md
index 1022ed5a9ba..7803f5f9141 100644
--- a/docs/ru/operations/tips.md
+++ b/docs/ru/operations/tips.md
@@ -5,7 +5,7 @@
Всегда используйте `performance` scaling governor. `ondemand` scaling governor работает намного хуже при постоянно высоком спросе.
```bash
-echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
+$ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
## Ограничение CPU
@@ -21,8 +21,8 @@ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_gover
Не выключайте overcommit. Значение `cat /proc/sys/vm/overcommit_memory` должно быть 0 или 1. Выполните:
-```
-echo 0 | sudo tee /proc/sys/vm/overcommit_memory
+```bash
+$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
```
## Huge pages
@@ -30,7 +30,7 @@ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
Механизм прозрачных huge pages нужно отключить. Он мешает работе аллокаторов памяти, что приводит к значительной деградации производительности.
```bash
-echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
+$ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
С помощью `perf top` можно наблюдать за временем, проведенном в ядре операционной системы для управления памятью.
@@ -55,7 +55,7 @@ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
При использовании RAID-5, RAID-6 или RAID-50, нужно всегда увеличивать stripe_cache_size, так как значение по умолчанию выбрано не самым удачным образом.
```bash
-echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
+$ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```
Точное число стоит вычислять из числа устройств и размера блока по формуле: `2 * num_devices * chunk_size_in_bytes / 4096`.
@@ -164,7 +164,7 @@ dynamicConfigFile=/etc/zookeeper-{{ cluster['name'] }}/conf/zoo.cfg.dynamic
Версия Java:
-```
+```text
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
```
@@ -212,7 +212,7 @@ JAVA_OPTS="-Xms{{ cluster.get('xms','128M') }} \
Salt init:
-```
+```text
description "zookeeper-{{ cluster['name'] }} centralized coordination service"
start on runlevel [2345]
diff --git a/docs/ru/operations/troubleshooting.md b/docs/ru/operations/troubleshooting.md
index a48eea394ca..844bd24fc37 100644
--- a/docs/ru/operations/troubleshooting.md
+++ b/docs/ru/operations/troubleshooting.md
@@ -25,14 +25,14 @@
Команда:
-```
-sudo service clickhouse-server status
+```bash
+$ sudo service clickhouse-server status
```
Если сервер не запущен, запустите его с помощью команды:
-```
-sudo service clickhouse-server start
+```bash
+$ sudo service clickhouse-server start
```
**Проверьте журналы**
@@ -46,19 +46,19 @@ sudo service clickhouse-server start
Если `clickhouse-server` не запустился из-за ошибки конфигурации, вы увидите строку с описанием ошибки. Например:
-```
+```text
2019.01.11 15:23:25.549505 [ 45 ] {} ExternalDictionaries: Failed reloading 'event2id' external dictionary: Poco::Exception. Code: 1000, e.code() = 111, e.displayText() = Connection refused, e.what() = Connection refused
```
Если вы не видите ошибки в конце файла, просмотрите весь файл начиная со строки:
-```
+```text
Application: starting up.
```
При попытке запустить второй экземпляр `clickhouse-server` журнал выглядит следующим образом:
-```
+```text
2019.01.11 15:25:11.151730 [ 1 ] {} : Starting ClickHouse 19.1.0 with revision 54413
2019.01.11 15:25:11.154578 [ 1 ] {} Application: starting up
2019.01.11 15:25:11.156361 [ 1 ] {} StatusFile: Status file ./status already exists - unclean restart. Contents:
@@ -76,14 +76,14 @@ Revision: 54413
Если из логов `clickhouse-server` вы не получили необходимой информации или логов нет, то вы можете посмотреть логи `system.d` командой:
-```
-sudo journalctl -u clickhouse-server
+```bash
+$ sudo journalctl -u clickhouse-server
```
**Запустите clickhouse-server в интерактивном режиме**
-```
-sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
+```bash
+$ sudo -u clickhouse /usr/bin/clickhouse-server --config-file /etc/clickhouse-server/config.xml
```
Эта команда запускает сервер как интерактивное приложение со стандартными параметрами скрипта автозапуска. В этом режиме `clickhouse-server` выводит сообщения в консоль.
diff --git a/docs/ru/operations/update.md b/docs/ru/operations/update.md
index 41346775637..eb870c2de9c 100644
--- a/docs/ru/operations/update.md
+++ b/docs/ru/operations/update.md
@@ -2,10 +2,10 @@
Если ClickHouse установлен с помощью deb-пакетов, выполните следующие команды на сервере:
-```
-sudo apt-get update
-sudo apt-get install clickhouse-client clickhouse-server
-sudo service clickhouse-server restart
+```bash
+$ sudo apt-get update
+$ sudo apt-get install clickhouse-client clickhouse-server
+$ sudo service clickhouse-server restart
```
Если ClickHouse установлен не из рекомендуемых deb-пакетов, используйте соответствующий метод обновления.
diff --git a/docs/ru/operations/utils/clickhouse-copier.md b/docs/ru/operations/utils/clickhouse-copier.md
index b38e25f6c16..9eb5a151a4a 100644
--- a/docs/ru/operations/utils/clickhouse-copier.md
+++ b/docs/ru/operations/utils/clickhouse-copier.md
@@ -23,7 +23,7 @@
Утилиту следует запускать вручную следующим образом:
```bash
-clickhouse-copier copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
+$ clickhouse-copier copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
```
Параметры запуска:
diff --git a/docs/ru/operations/utils/clickhouse-local.md b/docs/ru/operations/utils/clickhouse-local.md
index dd3e21d9ce1..a9c32e49606 100644
--- a/docs/ru/operations/utils/clickhouse-local.md
+++ b/docs/ru/operations/utils/clickhouse-local.md
@@ -14,8 +14,8 @@
Основной формат вызова:
-``` bash
-clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" -q "query"
+```bash
+$ clickhouse-local --structure "table_structure" --input-format "format_of_incoming_data" -q "query"
```
Ключи команды:
@@ -35,8 +35,8 @@ clickhouse-local --structure "table_structure" --input-format "format_of_incomin
## Примеры вызова
-``` bash
-echo -e "1,2\n3,4" | clickhouse-local -S "a Int64, b Int64" -if "CSV" -q "SELECT * FROM table"
+```bash
+$ echo -e "1,2\n3,4" | clickhouse-local -S "a Int64, b Int64" -if "CSV" -q "SELECT * FROM table"
Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec.
1 2
3 4
@@ -44,7 +44,7 @@ Read 2 rows, 32.00 B in 0.000 sec., 5182 rows/sec., 80.97 KiB/sec.
Вызов выше эквивалентен следующему:
-``` bash
+```bash
$ echo -e "1,2\n3,4" | clickhouse-local -q "CREATE TABLE table (a Int64, b Int64) ENGINE = File(CSV, stdin); SELECT a, b FROM table; DROP TABLE table"
Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec.
1 2
@@ -53,7 +53,7 @@ Read 2 rows, 32.00 B in 0.000 sec., 4987 rows/sec., 77.93 KiB/sec.
А теперь давайте выведем на экран объем оперативной памяти, занимаемой пользователями (Unix):
-``` bash
+```bash
$ ps aux | tail -n +2 | awk '{ printf("%s\t%s\n", $1, $4) }' | clickhouse-local -S "user String, mem Float64" -q "SELECT user, round(sum(mem), 2) as memTotal FROM table GROUP BY user ORDER BY memTotal DESC FORMAT Pretty"
Read 186 rows, 4.15 KiB in 0.035 sec., 5302 rows/sec., 118.34 KiB/sec.
┏━━━━━━━━━━┳━━━━━━━━━━┓
diff --git a/docs/ru/query_language/agg_functions/combinators.md b/docs/ru/query_language/agg_functions/combinators.md
index 1fcdb111e17..68e32fb7032 100644
--- a/docs/ru/query_language/agg_functions/combinators.md
+++ b/docs/ru/query_language/agg_functions/combinators.md
@@ -46,4 +46,75 @@
Преобразует агрегатную функцию для таблиц в агрегатную функцию для массивов, которая применяет агрегирование к соответствующим элементам массивов и возвращает массив результатов. Например, `sumForEach` для массивов `[1, 2]`, `[3, 4, 5]` и `[6, 7]` даст результат `[10, 13, 5]`, сложив соответствующие элементы массивов.
+
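+Например, минимальный набросок запроса, воспроизводящий результат из абзаца выше (таблица заменена подзапросом):
+
+```sql
+SELECT sumForEach(arr)
+FROM
+(
+    SELECT [1, 2] AS arr
+    UNION ALL SELECT [3, 4, 5]
+    UNION ALL SELECT [6, 7]
+)
+```
+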
+## -Resample {#agg_functions-combinator-resample}
+
+
+Позволяет поделить данные на группы, а затем по отдельности агрегировать данные для этих групп. Группы образуются разбиением значений одного из столбцов на интервалы.
+
+```sql
+aggFunctionResample(start, stop, step)(aggFunction_params, resampling_key)
+```
+
+**Параметры**
+
+- `start` — начальное значение для интервала значений `resampling_key`.
+- `stop` — конечное значение для интервала значений `resampling_key`. Интервал не включает значение `stop` (`[start, stop)`).
+- `step` — шаг деления полного интервала на подынтервалы. Функция `aggFunction` выполняется для каждого из подынтервалов независимо.
+- `resampling_key` — столбец, значения которого используются для разделения данных на интервалы.
+- `aggFunction_params` — параметры `aggFunction`.
+
+
+**Возвращаемые значения**
+
+- Массив результатов `aggFunction` для каждого подынтервала.
+
+**Пример**
+
+
+Рассмотрим таблицу `people` со следующими данными:
+
+```text
+┌─name───┬─age─┬─wage─┐
+│ John │ 16 │ 10 │
+│ Alice │ 30 │ 15 │
+│ Mary │ 35 │ 8 │
+│ Evelyn │ 48 │ 11.5 │
+│ David │ 62 │ 9.9 │
+│ Brian │ 60 │ 16 │
+└────────┴─────┴──────┘
+```
+
+Получим имена людей, чей возраст находится в интервалах `[30, 60)` и `[60, 75)`. Поскольку мы используем целочисленное представление возраста, то интервалы будут выглядеть как `[30, 59]` и `[60, 74]`.
+
+Чтобы собрать имена в массив, возьмём агрегатную функцию [groupArray](reference.md#agg_function-grouparray). Она принимает один аргумент. В нашем случае это столбец `name`. Функция `groupArrayResample` должна использовать столбец `age` для агрегирования имён по возрасту. Чтобы определить необходимые интервалы, передадим в функцию `groupArrayResample` аргументы `30, 75, 30`.
+
+```sql
+SELECT groupArrayResample(30, 75, 30)(name, age) from people
+```
+```text
+┌─groupArrayResample(30, 75, 30)(name, age)─────┐
+│ [['Alice','Mary','Evelyn'],['David','Brian']] │
+└───────────────────────────────────────────────┘
+```
+
+Посмотрим на результаты.
+
+`John` не попал в выдачу, поскольку слишком молод. Остальные распределены согласно заданным возрастным интервалам.
+
+Теперь посчитаем общее количество людей и их среднюю заработную плату в заданных возрастных интервалах.
+
+
+```sql
+SELECT
+ countResample(30, 75, 30)(name, age) AS amount,
+ avgResample(30, 75, 30)(wage, age) AS avg_wage
+FROM people
+```
+```text
+┌─amount─┬─avg_wage──────────────────┐
+│ [3,2] │ [11.5,12.949999809265137] │
+└────────┴───────────────────────────┘
+```
+
[Оригинальная статья](https://clickhouse.yandex/docs/ru/query_language/agg_functions/combinators/)
diff --git a/docs/ru/query_language/agg_functions/index.md b/docs/ru/query_language/agg_functions/index.md
index 138bae48fd3..fad195991db 100644
--- a/docs/ru/query_language/agg_functions/index.md
+++ b/docs/ru/query_language/agg_functions/index.md
@@ -15,7 +15,7 @@ ClickHouse поддерживает также:
Рассмотрим таблицу:
-```
+```text
┌─x─┬────y─┐
│ 1 │ 2 │
│ 2 │ ᴺᵁᴸᴸ │
@@ -27,17 +27,14 @@ ClickHouse поддерживает также:
Выполним суммирование значений в столбце `y`:
+```sql
+SELECT sum(y) FROM t_null_big
```
-:) SELECT sum(y) FROM t_null_big
-
-SELECT sum(y)
-FROM t_null_big
-
+```text
┌─sum(y)─┐
│ 7 │
└────────┘
-1 rows in set. Elapsed: 0.002 sec.
```
Функция `sum` работает с `NULL` как с `0`. В частности, это означает, что если на вход в функцию подать выборку, где все значения `NULL`, то результат будет `0`, а не `NULL`.
@@ -45,17 +42,13 @@ FROM t_null_big
Теперь с помощью функции `groupArray` сформируем массив из столбца `y`:
+```sql
+SELECT groupArray(y) FROM t_null_big
```
-:) SELECT groupArray(y) FROM t_null_big
-
-SELECT groupArray(y)
-FROM t_null_big
-
+```text
┌─groupArray(y)─┐
│ [2,2,3] │
└───────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
`groupArray` не включает `NULL` в результирующий массив.
diff --git a/docs/ru/query_language/agg_functions/parametric_functions.md b/docs/ru/query_language/agg_functions/parametric_functions.md
index 93e96be1dfa..5adf20dfce5 100644
--- a/docs/ru/query_language/agg_functions/parametric_functions.md
+++ b/docs/ru/query_language/agg_functions/parametric_functions.md
@@ -2,6 +2,75 @@
Некоторые агрегатные функции могут принимать не только столбцы-аргументы (по которым производится свёртка), но и набор параметров - констант для инициализации. Синтаксис - две пары круглых скобок вместо одной. Первая - для параметров, вторая - для аргументов.
+## histogram
+
+Рассчитывает адаптивную гистограмму. Не гарантирует точного результата.
+
+```sql
+histogram(number_of_bins)(values)
+```
+
+Функция использует [A Streaming Parallel Decision Tree Algorithm](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf). Границы столбцов устанавливаются по мере поступления новых данных в функцию. В общем случае столбцы имеют разную ширину.
+
+**Параметры**
+
+- `number_of_bins` — максимальное количество корзин в гистограмме. Функция автоматически вычисляет количество корзин. Она пытается получить указанное количество корзин, но если не получилось, то в результате корзин будет меньше.
+- `values` — [выражение](../syntax.md#syntax-expressions), предоставляющее входные значения.
+
+**Возвращаемые значения**
+
+- [Массив](../../data_types/array.md) [кортежей](../../data_types/tuple.md) следующего вида:
+
+    ```text
+ [(lower_1, upper_1, height_1), ... (lower_N, upper_N, height_N)]
+ ```
+
+ - `lower` — нижняя граница корзины.
+ - `upper` — верхняя граница корзины.
+ - `height` — количество значений в корзине.
+
+**Пример**
+
+```sql
+SELECT histogram(5)(number + 1)
+FROM (
+ SELECT *
+ FROM system.numbers
+ LIMIT 20
+)
+```
+```text
+┌─histogram(5)(plus(number, 1))───────────────────────────────────────────┐
+│ [(1,4.5,4),(4.5,8.5,4),(8.5,12.75,4.125),(12.75,17,4.625),(17,20,3.25)] │
+└─────────────────────────────────────────────────────────────────────────┘
+```
+
+С помощью функции [bar](../other_functions.md#function-bar) можно визуализировать гистограмму, например:
+
+```sql
+WITH histogram(5)(rand() % 100) AS hist
+SELECT
+ arrayJoin(hist).3 AS height,
+ bar(height, 0, 6, 5) AS bar
+FROM
+(
+ SELECT *
+ FROM system.numbers
+ LIMIT 20
+)
+```
+```text
+┌─height─┬─bar───┐
+│ 2.125 │ █▋ │
+│ 3.25 │ ██▌ │
+│ 5.625 │ ████▏ │
+│ 5.625 │ ████▏ │
+│ 3.375 │ ██▌ │
+└────────┴───────┘
+```
+
+В этом случае необходимо помнить, что границы корзин гистограммы неизвестны.
+
## sequenceMatch(pattern)(time, cond1, cond2, ...)
Сопоставление с образцом для цепочки событий.
@@ -21,7 +90,7 @@
Это вырожденный пример. Его можно записать с помощью других агрегатных функций:
-```
+```sql
minIf(EventTime, URL LIKE '%company%') < maxIf(EventTime, URL LIKE '%cart%')
```
@@ -53,7 +122,7 @@ minIf(EventTime, URL LIKE '%company%') < maxIf(EventTime, URL LIKE '%cart%').
Отыскивает цепочки событий в скользящем окне по времени и вычисляет максимальное количество произошедших событий из цепочки.
-```
+```sql
windowFunnel(window)(timestamp, cond1, cond2, cond3, ...)
```
@@ -85,7 +154,7 @@ windowFunnel(window)(timestamp, cond1, cond2, cond3, ...)
Чтобы узнать, как далеко пользователь `user_id` смог пройти по цепочке за час в январе 2017-го года, составим запрос:
-```
+```sql
SELECT
level,
count() AS c
@@ -121,7 +190,7 @@ ORDER BY level
Пример применения:
-```
+```text
Задача: показывать в отчёте только поисковые фразы, по которым было хотя бы 5 уникальных посетителей.
Решение: пишем в запросе GROUP BY SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
```
diff --git a/docs/ru/query_language/agg_functions/reference.md b/docs/ru/query_language/agg_functions/reference.md
index b948b17abee..bcf362d4c50 100644
--- a/docs/ru/query_language/agg_functions/reference.md
+++ b/docs/ru/query_language/agg_functions/reference.md
@@ -81,7 +81,7 @@ SELECT count(DISTINCT num) FROM t
Выбирает часто встречающееся значение с помощью алгоритма "[heavy hitters](http://www.cs.umd.edu/~samir/498/karp.pdf)". Если существует значение, которое встречается чаще, чем в половине случаев, в каждом потоке выполнения запроса, то возвращается данное значение. В общем случае, результат недетерминирован.
-```
+```sql
anyHeavy(column)
```
@@ -98,7 +98,7 @@ SELECT anyHeavy(AirlineID) AS res
FROM ontime
```
-```
+```text
┌───res─┐
│ 19690 │
└───────┘
@@ -113,7 +113,7 @@ FROM ontime
Применяет побитовое `И` для последовательности чисел.
-```
+```sql
groupBitAnd(expr)
```
@@ -129,7 +129,7 @@ groupBitAnd(expr)
Тестовые данные:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -139,7 +139,7 @@ binary decimal
Запрос:
-```
+```sql
SELECT groupBitAnd(num) FROM t
```
@@ -147,7 +147,7 @@ SELECT groupBitAnd(num) FROM t
Результат:
-```
+```text
binary decimal
00000100 = 4
```
@@ -156,7 +156,7 @@ binary decimal
Применяет побитовое `ИЛИ` для последовательности чисел.
-```
+```sql
groupBitOr(expr)
```
@@ -172,7 +172,7 @@ groupBitOr(expr)
Тестовые данные:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -182,7 +182,7 @@ binary decimal
Запрос:
-```
+```sql
SELECT groupBitOr(num) FROM t
```
@@ -190,7 +190,7 @@ SELECT groupBitOr(num) FROM t
Результат:
-```
+```text
binary decimal
01111101 = 125
```
@@ -199,7 +199,7 @@ binary decimal
Применяет побитовое `ИСКЛЮЧАЮЩЕЕ ИЛИ` для последовательности чисел.
-```
+```sql
groupBitXor(expr)
```
@@ -215,7 +215,7 @@ groupBitXor(expr)
Тестовые данные:
-```
+```text
binary decimal
00101100 = 44
00011100 = 28
@@ -225,7 +225,7 @@ binary decimal
Запрос:
-```
+```sql
SELECT groupBitXor(num) FROM t
```
@@ -233,7 +233,7 @@ SELECT groupBitXor(num) FROM t
Результат:
-```
+```text
binary decimal
01101000 = 104
```
@@ -242,7 +242,7 @@ binary decimal
Bitmap-вычисления или агрегатные вычисления для столбца с типом данных `UInt*` возвращают кардинальность в виде значения типа UInt64. Если добавить суффикс -State, то возвращается [объект bitmap](../functions/bitmap_functions.md).
-```
+```sql
groupBitmap(expr)
```
@@ -258,7 +258,7 @@ groupBitmap(expr)
Тестовые данные:
-```
+```text
UserID
1
1
@@ -268,13 +268,13 @@ UserID
Запрос:
-```
+```sql
SELECT groupBitmap(UserID) as num FROM t
```
Результат:
-```
+```text
num
3
```
@@ -293,15 +293,17 @@ num
**Пример:**
-```
+```text
┌─user─────┬─salary─┐
│ director │ 5000 │
│ manager │ 3000 │
│ worker │ 1000 │
└──────────┴────────┘
-
+```
+```sql
SELECT argMin(user, salary) FROM salary
-
+```
+```text
┌─argMin(user, salary)─┐
│ worker │
└──────────────────────┘
@@ -353,7 +355,7 @@ FROM sum_map
GROUP BY timeslot
```
-```
+```text
┌────────────timeslot─┬─sumMap(statusMap.status, statusMap.requests)─┐
│ 2000-01-01 00:00:00 │ ([1,2,3,4,5],[10,10,20,10,10]) │
│ 2000-01-01 00:01:00 │ ([4,5,6,7,8],[10,10,20,10,10]) │
@@ -364,7 +366,7 @@ GROUP BY timeslot
Вычисляет [коэффициент асимметрии](https://ru.wikipedia.org/wiki/Коэффициент_асимметрии) для последовательности.
-```
+```sql
skewPop(expr)
```
@@ -388,7 +390,7 @@ SELECT skewPop(value) FROM series_with_value_column
Он представляет собой несмещенную оценку асимметрии случайной величины, если переданные значения образуют ее выборку.
-```
+```sql
skewSamp(expr)
```
@@ -410,7 +412,7 @@ SELECT skewSamp(value) FROM series_with_value_column
Вычисляет [коэффициент эксцесса](https://ru.wikipedia.org/wiki/Коэффициент_эксцесса) последовательности.
-```
+```sql
kurtPop(expr)
```
@@ -434,7 +436,7 @@ SELECT kurtPop(value) FROM series_with_value_column
Он представляет собой несмещенную оценку эксцесса случайной величины, если переданные значения образуют ее выборку.
-```
+```sql
kurtSamp(expr)
```
@@ -467,7 +469,7 @@ SELECT kurtSamp(value) FROM series_with_value_column
Пример:
-```
+```text
┌─uid─┬─timestamp─┬─value─┐
│ 1 │ 2 │ 0.2 │
│ 1 │ 7 │ 0.7 │
@@ -482,7 +484,7 @@ SELECT kurtSamp(value) FROM series_with_value_column
└─────┴───────────┴───────┘
```
-```
+```sql
CREATE TABLE time_series(
uid UInt64,
timestamp Int64,
@@ -500,7 +502,7 @@ FROM (
И результат будет:
-```
+```text
[(2,0.2),(3,0.9),(7,2.1),(8,2.4),(12,3.6),(17,5.1),(18,5.4),(24,7.2),(25,2.5)]
```
@@ -511,7 +513,7 @@ FROM (
Для примера из описания timeSeriesGroupRateSum результат будет следующим:
-```
+```text
[(2,0),(3,0.1),(7,0.3),(8,0.3),(12,0.3),(17,0.3),(18,0.3),(24,0.3),(25,0.1)]
```
@@ -525,7 +527,7 @@ FROM (
Приближённо вычисляет количество различных значений аргумента.
-```
+```sql
uniq(x[, ...])
```
@@ -561,7 +563,7 @@ uniq(x[, ...])
Приближённо вычисляет количество различных значений аргумента.
-```
+```sql
uniqCombined(HLL_precision)(x[, ...])
```
@@ -605,7 +607,7 @@ uniqCombined(HLL_precision)(x[, ...])
Вычисляет приблизительное число различных значений аргументов, используя алгоритм [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog).
-```
+```sql
uniqHLL12(x[, ...])
```
@@ -641,7 +643,7 @@ uniqHLL12(x[, ...])
Вычисляет точное количество различных значений аргументов.
-```
+```sql
uniqExact(x[, ...])
```
@@ -659,7 +661,7 @@ uniqExact(x[, ...])
- [uniqCombined](#agg_function-uniqcombined)
- [uniqHLL12](#agg_function-uniqhll12)
-## groupArray(x), groupArray(max_size)(x)
+## groupArray(x), groupArray(max_size)(x) {#agg_function-grouparray}
Составляет массив из значений аргумента.
Значения в массив могут быть добавлены в любом (недетерминированном) порядке.
@@ -685,7 +687,7 @@ uniqExact(x[, ...])
Вычисляет скользящую сумму входных значений.
-```
+```sql
groupArrayMovingSum(numbers_for_summing)
groupArrayMovingSum(window_size)(numbers_for_summing)
```
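+
+Например, набросок запроса со скользящей суммой по окну из двух значений:
+
+```sql
+SELECT groupArrayMovingSum(2)(v) FROM (SELECT arrayJoin([1, 2, 3, 4]) AS v)
+```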
@@ -835,7 +837,7 @@ FROM t
Функция `groupUniqArray(max_size)(x)` ограничивает размер результирующего массива до `max_size` элементов. Например, `groupUniqArray(1)(x)` равнозначно `[any(x)]`.
-## quantile(level)(x)
+## quantile(level)(x) {#agg_function-quantile}
Приближённо вычисляет квантиль уровня level. level - константа, число с плавающей запятой от 0 до 1.
Рекомендуется использовать значения level в диапазоне `[0.01, 0.99]`.
@@ -862,27 +864,55 @@ FROM t
Не используйте эту функцию для расчёта таймингов. Для этого есть более подходящая функция - `quantileTiming`.
-## quantileTiming(level)(x)
+## quantileTiming {#agg_function-quantiletiming}
-Вычисляет квантиль уровня level с фиксированной точностью.
-Работает для чисел. Предназначена для расчёта квантилей от времени загрузки страницы в миллисекундах.
+Вычисляет квантиль заданного уровня с детерминированной точностью. Функция предназначена для расчётов квантилей времени загрузки страниц в миллисекундах.
-Если значение больше 30000 (соответствует времени загрузки страницы большем 30 секундам) - результат приравнивается к 30000.
+```sql
+quantileTiming(level)(expr)
+```
-Если всего значений не больше примерно 5670, то вычисление точное.
+**Параметры**
-Иначе:
+- `level` — уровень квантили. Диапазон: [0, 1].
+- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее число типа [Float*](../../data_types/float.md). Функция ожидает на вход значения UNIX-времени в миллисекундах, но не проверяет формат входных значений.
+
+ - Поведение функции не определено для отрицательных входных значений.
+  - Если входное значение больше 30 000 (т.е. время загрузки страницы превышает 30 секунд), оно приравнивается к 30 000.
-- если время меньше 1024 мс., то вычисление точное.
-- иначе вычисление идёт с округлением до числа, кратного 16 мс.
+**Точность**
-При передаче в функцию отрицательных значений, поведение не определено.
+Вычисления точны, если:
-Возвращаемое значение имеет тип Float32. Когда в функцию не было передано ни одного значения (при использовании `quantileTimingIf`), возвращается nan. Это сделано, чтобы отличать такие случаи от нулей. Смотрите замечание о сортировке NaN-ов в разделе «Секция ORDER BY».
+- Общее количество значений не превышает 5670.
+- Общее количество значений больше 5670, но времена загрузки страниц меньше 1024 мс.
+
+В противном случае результат расчётов округляется до ближайшего числа, кратного 16 мс.
+
+!!! note "Примечание"
+    Для расчёта квантилей времени загрузки страниц функция работает эффективнее и с более высокой точностью, чем функция [quantile](#agg_function-quantile).
+
+**Возвращаемое значение**
+
+- Квантиль заданного уровня.
+
+Тип: `Float32`.
+
+!!! note "Примечание"
+    Если в функцию не передано значений (для `quantileTimingIf`), возвращается [NaN](../../data_types/float.md#data_type-float-nan-inf). Это необходимо для того, чтобы отделить такие случаи от случаев, в которых результат 0. Смотрите замечания о сортировке значений `NaN` в разделе [Секция ORDER BY](../select.md#select-order-by).
Результат детерминирован (не зависит от порядка выполнения запроса).
-Для своей задачи (расчёт квантилей времени загрузки страниц), использование этой функции эффективнее и результат точнее, чем для функции `quantile`.
+**Пример**
+
+```sql
+SELECT quantileTiming(0.5)(number / 2) FROM numbers(10)
+```
+```text
+┌─quantileTiming(0.5)(divide(number, 2))─┐
+│ 2 │
+└────────────────────────────────────────┘
+```
## quantileTimingWeighted(level)(x, weight)
@@ -943,7 +973,7 @@ FROM t
Реализует [Filtered Space-Saving](http://www.l2f.inesc-id.pt/~fmmb/wiki/uploads/Work/misnis.ref0a.pdf) алгоритм для анализа TopK, на основе reduce-and-combine алгоритма из методики [Parallel Space Saving](https://arxiv.org/pdf/1401.0702.pdf).
-```
+```sql
topK(N)(column)
```
@@ -965,7 +995,7 @@ SELECT topK(3)(AirlineID) AS res
FROM ontime
```
-```
+```text
┌─res─────────────────┐
│ [19393,19790,19805] │
└─────────────────────┘
@@ -989,7 +1019,7 @@ FROM ontime
Выполняет простую (одномерную) линейную регрессию.
-```
+```sql
simpleLinearRegression(x, y)
```
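+
+Например, набросок: для точек, лежащих на прямой `y = 2x + 1`, функция вернёт её коэффициенты.
+
+```sql
+SELECT simpleLinearRegression(x, 2 * x + 1)
+FROM (SELECT arrayJoin([0, 1, 2, 3]) AS x)
+```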
diff --git a/docs/ru/query_language/alter.md b/docs/ru/query_language/alter.md
index 21f19323d2d..685ad914722 100644
--- a/docs/ru/query_language/alter.md
+++ b/docs/ru/query_language/alter.md
@@ -6,7 +6,7 @@
Изменение структуры таблицы.
-``` sql
+```sql
ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|CLEAR|COMMENT|MODIFY COLUMN ...
```
@@ -25,7 +25,7 @@ ALTER TABLE [db].name [ON CLUSTER cluster] ADD|DROP|CLEAR|COMMENT|MODIFY COLUMN
#### ADD COLUMN {#alter_add-column}
-``` sql
+```sql
ADD COLUMN [IF NOT EXISTS] name [type] [default_expr] [AFTER name_after]
```
@@ -39,13 +39,13 @@ ADD COLUMN [IF NOT EXISTS] name [type] [default_expr] [AFTER name_after]
Пример:
-``` sql
+```sql
ALTER TABLE visits ADD COLUMN browser String AFTER user_id
```
#### DROP COLUMN {#alter_drop-column}
-``` sql
+```sql
DROP COLUMN [IF EXISTS] name
```
@@ -55,13 +55,13 @@ DROP COLUMN [IF EXISTS] name
Пример:
-``` sql
+```sql
ALTER TABLE visits DROP COLUMN browser
```
#### CLEAR COLUMN {#alter_clear-column}
-``` sql
+```sql
CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name
```
Сбрасывает все значения в столбце для заданной партиции. Если указано `IF EXISTS`, запрос не будет возвращать ошибку, если столбца не существует.
@@ -70,13 +70,13 @@ CLEAR COLUMN [IF EXISTS] name IN PARTITION partition_name
Пример:
-``` sql
+```sql
ALTER TABLE visits CLEAR COLUMN browser IN PARTITION tuple()
```
#### COMMENT COLUMN {#alter_comment-column}
-``` sql
+```sql
COMMENT COLUMN [IF EXISTS] name 'Text comment'
```
@@ -88,13 +88,13 @@ COMMENT COLUMN [IF EXISTS] name 'Text comment'
Пример:
-``` sql
+```sql
ALTER TABLE visits COMMENT COLUMN browser 'Столбец показывает, из каких браузеров пользователи заходили на сайт.'
```
#### MODIFY COLUMN {#alter_modify-column}
-``` sql
+```sql
MODIFY COLUMN [IF EXISTS] name [type] [default_expr]
```
@@ -104,7 +104,7 @@ MODIFY COLUMN [IF EXISTS] name [type] [default_expr]
Пример запроса:
-``` sql
+```sql
ALTER TABLE visits MODIFY COLUMN browser Array(String)
```
@@ -138,7 +138,7 @@ ALTER TABLE visits MODIFY COLUMN browser Array(String)
Поддерживается операция:
-``` sql
+```sql
MODIFY ORDER BY new_expression
```
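+
+Набросок (имена таблицы и столбцов условные):
+
+```sql
+ALTER TABLE visits MODIFY ORDER BY (CounterID, EventDate, browser)
+```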
@@ -153,7 +153,7 @@ MODIFY ORDER BY new_expression
### Манипуляции с индексами
Добавить или удалить индекс можно с помощью операций
-```
+```sql
ALTER TABLE [db].name ADD INDEX name expression TYPE type GRANULARITY value [AFTER name]
ALTER TABLE [db].name DROP INDEX name
```
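+
+Набросок (имена условные):
+
+```sql
+ALTER TABLE visits ADD INDEX browser_idx browser TYPE set(100) GRANULARITY 4
+```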
@@ -170,7 +170,7 @@ ALTER TABLE [db].name DROP INDEX name
Про ограничения подробнее написано [тут](create.md#constraints).
Добавить или удалить ограничение можно с помощью запросов
-```
+```sql
ALTER TABLE [db].name ADD CONSTRAINT constraint_name CHECK expression;
ALTER TABLE [db].name DROP CONSTRAINT constraint_name;
```
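+
+Набросок (имена условные):
+
+```sql
+ALTER TABLE visits ADD CONSTRAINT non_empty_browser CHECK browser != '';
+```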
@@ -286,7 +286,7 @@ ALTER TABLE visits CLEAR COLUMN hour in PARTITION 201902
#### CLEAR INDEX IN PARTITION {#alter_clear-index-partition}
-``` sql
+```sql
ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr
```
@@ -399,7 +399,7 @@ ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```
-Примеры запросов `ALTER ... PARTITION` можно посмотреть в тестах: [`00502_custom_partitioning_local`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) и [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
+Примеры запросов `ALTER ... PARTITION` можно посмотреть в тестах: [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_local.sql) и [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).
### Синхронность запросов ALTER
@@ -418,19 +418,19 @@ OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
На данный момент доступны команды:
-``` sql
+```sql
ALTER TABLE [db.]table DELETE WHERE filter_expr
```
Выражение `filter_expr` должно иметь тип UInt8. Запрос удаляет строки таблицы, для которых это выражение принимает ненулевое значение.
-``` sql
+```sql
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
```
Команда доступна начиная с версии 18.12.14. Выражение `filter_expr` должно иметь тип UInt8. Запрос изменяет значение указанных столбцов на вычисленное значение соответствующих выражений в каждой строке, для которой `filter_expr` принимает ненулевое значение. Вычисленные значения преобразуются к типу столбца с помощью оператора `CAST`. Изменение столбцов, которые используются при вычислении первичного ключа или ключа партиционирования, не поддерживается.
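+
+Наброски запросов `DELETE` и `UPDATE` (имена условные):
+
+```sql
+ALTER TABLE visits DELETE WHERE browser = '';
+ALTER TABLE visits UPDATE browser = 'unknown' WHERE browser = '';
+```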
-``` sql
+```sql
ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name
```
diff --git a/docs/ru/query_language/create.md b/docs/ru/query_language/create.md
index d32afb7b9d9..c10cfff7685 100644
--- a/docs/ru/query_language/create.md
+++ b/docs/ru/query_language/create.md
@@ -51,7 +51,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name AS [db2.]name2 [ENGINE = engine]
Создаёт таблицу с такой же структурой, как другая таблица. Можно указать другой движок для таблицы. Если движок не указан, то будет выбран такой же движок, как у таблицы `db2.name2`.
-``` sql
+```sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function()
```
@@ -144,7 +144,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
Высокие уровни сжатия полезны для асимметричных сценариев, например, для таких, в которых требуется однократное сжатие и многократная распаковка. Более высокие уровни обеспечивают лучшее сжатие, но более высокое потребление вычислительных ресурсов.
!!! warning "Предупреждение"
- Базу данных ClickHouse не получится распаковать с помощью внешних утилит типа `lz4`. Используйте специальную программу [clickhouse-compressor](https://github.com/yandex/ClickHouse/tree/master/dbms/programs/compressor).
+ Базу данных ClickHouse не получится распаковать с помощью внешних утилит типа `lz4`. Используйте специальную программу [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/dbms/programs/compressor).
Пример использования:
diff --git a/docs/ru/query_language/dicts/external_dicts_dict_layout.md b/docs/ru/query_language/dicts/external_dicts_dict_layout.md
index 826d9b78ae9..22fd6ec0d91 100644
--- a/docs/ru/query_language/dicts/external_dicts_dict_layout.md
+++ b/docs/ru/query_language/dicts/external_dicts_dict_layout.md
@@ -99,7 +99,7 @@
Пример: таблица содержит скидки для каждого рекламодателя в виде:
-```
+```text
+---------------+---------------------+-------------------+--------+
| advertiser id | discount start date | discount end date | amount |
+===============+=====================+===================+========+
@@ -226,7 +226,7 @@
Пример: таблица содержит префиксы сети и соответствующие им номера AS и коды стран:
-```
+```text
+-----------------+-------+--------+
| prefix | asn | cca2 |
+=================+=======+========+
@@ -269,13 +269,13 @@
Для запросов необходимо использовать те же функции (`dictGetT` с кортежем), что и для словарей с составными ключами:
-```
+```sql
dictGetT('dict_name', 'attr_name', tuple(ip))
```
Функция принимает либо `UInt32` для IPv4, либо `FixedString(16)` для IPv6:
-```
+```sql
dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))
```
diff --git a/docs/ru/query_language/dicts/external_dicts_dict_sources.md b/docs/ru/query_language/dicts/external_dicts_dict_sources.md
index 1c0d214053c..9a25ffdb4aa 100644
--- a/docs/ru/query_language/dicts/external_dicts_dict_sources.md
+++ b/docs/ru/query_language/dicts/external_dicts_dict_sources.md
@@ -132,7 +132,7 @@ ClickHouse получает от ODBC-драйвера информацию о
Сконфигурируем unixODBC для работы с PostgreSQL. Содержимое `/etc/odbc.ini`:
-```
+```text
[gregtest]
Driver = /usr/lib/psqlodbca.so
Servername = localhost
@@ -145,7 +145,7 @@ PASSWORD = test
Если выполнить запрос вида:
-```
+```sql
SELECT * FROM odbc('DSN=gregtest;Servername=some-server.com', 'test_db');
```
@@ -156,12 +156,12 @@ SELECT * FROM odbc('DSN=gregtest;Servername=some-server.com', 'test_db');
ОС Ubuntu.
Установка unixODBC и ODBC-драйвера для PostgreSQL:
-
- sudo apt-get install -y unixodbc odbcinst odbc-postgresql
-
+```bash
+$ sudo apt-get install -y unixodbc odbcinst odbc-postgresql
+```
Настройка `/etc/odbc.ini` (или `~/.odbc.ini`):
-```
+```text
[DEFAULT]
Driver = myconnection
@@ -223,13 +223,13 @@ SELECT * FROM odbc('DSN=gregtest;Servername=some-server.com', 'test_db');
Установка драйвера:
-```
- sudo apt-get install tdsodbc freetds-bin sqsh
+```bash
+$ sudo apt-get install tdsodbc freetds-bin sqsh
```
Настройка драйвера:
-```
+```bash
$ cat /etc/freetds/freetds.conf
...
diff --git a/docs/ru/query_language/functions/arithmetic_functions.md b/docs/ru/query_language/functions/arithmetic_functions.md
index e17d193c203..3c541b50a99 100644
--- a/docs/ru/query_language/functions/arithmetic_functions.md
+++ b/docs/ru/query_language/functions/arithmetic_functions.md
@@ -4,11 +4,11 @@
Пример:
-``` sql
+```sql
SELECT toTypeName(0), toTypeName(0 + 0), toTypeName(0 + 0 + 0), toTypeName(0 + 0 + 0 + 0)
```
-```
+```text
┌─toTypeName(0)─┬─toTypeName(plus(0, 0))─┬─toTypeName(plus(plus(0, 0), 0))─┬─toTypeName(plus(plus(plus(0, 0), 0), 0))─┐
│ UInt8 │ UInt16 │ UInt32 │ UInt64 │
└───────────────┴────────────────────────┴─────────────────────────────────┴──────────────────────────────────────────┘
diff --git a/docs/ru/query_language/functions/array_functions.md b/docs/ru/query_language/functions/array_functions.md
index 11d5c819b02..19e3bb965c5 100644
--- a/docs/ru/query_language/functions/array_functions.md
+++ b/docs/ru/query_language/functions/array_functions.md
@@ -1,12 +1,12 @@
# Функции по работе с массивами
-## empty
+## empty {#function-empty}
Возвращает 1 для пустого массива и 0 для непустого массива.
Тип результата - UInt8.
Функция также работает для строк.
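+
+Пример:
+
+```sql
+SELECT empty([]), empty([1, 2]), empty('')
+```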
-## notEmpty
+## notEmpty {#function-notempty}
Возвращает 0 для пустого массива и 1 для непустого массива.
Тип результата - UInt8.
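+
+Пример:
+
+```sql
+SELECT notEmpty([1, 2]), notEmpty([])
+```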
@@ -49,7 +49,7 @@
Объединяет массивы, переданные в качестве аргументов.
-```
+```sql
arrayConcat(arrays)
```
@@ -62,7 +62,7 @@ arrayConcat(arrays)
SELECT arrayConcat([1, 2], [3, 4], [5, 6]) AS res
```
-```
+```text
┌─res───────────┐
│ [1,2,3,4,5,6] │
└───────────────┘
@@ -83,9 +83,10 @@ SELECT arrayConcat([1, 2], [3, 4], [5, 6]) AS res
`NULL` обрабатывается как значение.
-```
+```sql
SELECT has([1, 2, NULL], NULL)
-
+```
+```text
┌─has([1, 2, NULL], NULL)─┐
│ 1 │
└─────────────────────────┘
@@ -95,7 +96,7 @@ SELECT has([1, 2, NULL], NULL)
Проверяет, является ли один массив подмножеством другого.
-```
+```sql
hasAll(set, subset)
```
@@ -133,7 +134,7 @@ hasAll(set, subset)
Проверяет, имеют ли два массива хотя бы один общий элемент.
-```
+```sql
hasAny(array1, array2)
```
@@ -170,11 +171,10 @@ hasAny(array1, array2)
Пример:
-```
-:) SELECT indexOf([1,3,NULL,NULL],NULL)
-
+```sql
SELECT indexOf([1, 3, NULL, NULL], NULL)
-
+```
+```text
┌─indexOf([1, 3, NULL, NULL], NULL)─┐
│ 3 │
└───────────────────────────────────┘
@@ -190,9 +190,10 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)
Пример:
-```
+```sql
SELECT countEqual([1, 2, NULL, NULL], NULL)
-
+```
+```text
┌─countEqual([1, 2, NULL, NULL], NULL)─┐
│ 2 │
└──────────────────────────────────────┘
@@ -216,7 +217,7 @@ WHERE CounterID = 160656
LIMIT 10
```
-```
+```text
┌─Reaches─┬──Hits─┐
│ 95606 │ 31406 │
└─────────┴───────┘
@@ -232,7 +233,7 @@ FROM test.hits
WHERE (CounterID = 160656) AND notEmpty(GoalsReached)
```
-```
+```text
┌─Reaches─┬──Hits─┐
│ 95606 │ 31406 │
└─────────┴───────┘
@@ -263,7 +264,7 @@ ORDER BY Reaches DESC
LIMIT 10
```
-```
+```text
┌──GoalID─┬─Reaches─┬─Visits─┐
│ 53225 │ 3214 │ 1097 │
│ 2825062 │ 3188 │ 1097 │
@@ -286,7 +287,7 @@ LIMIT 10
SELECT arrayEnumerateUniq([1, 1, 1, 2, 2, 2], [1, 1, 2, 1, 1, 2]) AS res
```
-```
+```text
┌─res───────────┐
│ [1,2,1,1,2,1] │
└───────────────┘
@@ -298,7 +299,7 @@ SELECT arrayEnumerateUniq([1, 1, 1, 2, 2, 2], [1, 1, 2, 1, 1, 2]) AS res
Удаляет последний элемент из массива.
-```
+```sql
arrayPopBack(array)
```
@@ -311,7 +312,7 @@ arrayPopBack(array)
```sql
SELECT arrayPopBack([1, 2, 3]) AS res
```
-
-```
+```text
┌─res───┐
│ [1,2] │
@@ -322,7 +323,7 @@ SELECT arrayPopBack([1, 2, 3]) AS res
Удаляет первый элемент из массива.
-```
+```sql
arrayPopFront(array)
```
@@ -336,7 +337,7 @@ arrayPopFront(array)
SELECT arrayPopFront([1, 2, 3]) AS res
```
-```
+```text
┌─res───┐
│ [2,3] │
└───────┘
@@ -346,7 +347,7 @@ SELECT arrayPopFront([1, 2, 3]) AS res
Добавляет один элемент в конец массива.
-```
+```sql
arrayPushBack(array, single_value)
```
@@ -361,7 +362,7 @@ arrayPushBack(array, single_value)
SELECT arrayPushBack(['a'], 'b') AS res
```
-```
+```text
┌─res───────┐
│ ['a','b'] │
└───────────┘
@@ -371,7 +372,7 @@ SELECT arrayPushBack(['a'], 'b') AS res
Добавляет один элемент в начало массива.
-```
+```sql
arrayPushFront(array, single_value)
```
@@ -386,7 +387,7 @@ arrayPushFront(array, single_value)
SELECT arrayPushBack(['b'], 'a') AS res
```
-```
+```text
┌─res───────┐
│ ['a','b'] │
└───────────┘
@@ -396,7 +397,7 @@ SELECT arrayPushBack(['b'], 'a') AS res
Изменяет длину массива.
-```
+```sql
arrayResize(array, size[, extender])
```
@@ -414,17 +415,19 @@ arrayResize(array, size[, extender])
**Примеры вызовов**
-```
+```sql
SELECT arrayResize([1], 3)
-
+```
+```text
┌─arrayResize([1], 3)─┐
│ [1,0,0] │
└─────────────────────┘
```
-```
+```sql
SELECT arrayResize([1], 3, NULL)
-
+```
+```text
┌─arrayResize([1], 3, NULL)─┐
│ [1,NULL,NULL] │
└───────────────────────────┘
@@ -434,7 +437,7 @@ SELECT arrayResize([1], 3, NULL)
Возвращает срез массива.
-```
+```sql
arraySlice(array, offset[, length])
```
@@ -450,7 +453,7 @@ arraySlice(array, offset[, length])
SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res
```
-```
+```text
┌─res────────┐
│ [2,NULL,4] │
└────────────┘
@@ -464,10 +467,10 @@ SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res
Пример сортировки целочисленных значений:
-``` sql
+```sql
SELECT arraySort([1, 3, 3, 0])
```
-```
+```text
┌─arraySort([1, 3, 3, 0])─┐
│ [0,1,3,3] │
└─────────────────────────┘
@@ -475,10 +478,10 @@ SELECT arraySort([1, 3, 3, 0])
Пример сортировки строковых значений:
-``` sql
+```sql
SELECT arraySort(['hello', 'world', '!'])
```
-```
+```text
┌─arraySort(['hello', 'world', '!'])─┐
│ ['!','hello','world'] │
└────────────────────────────────────┘
@@ -486,10 +489,10 @@ SELECT arraySort(['hello', 'world', '!'])
Значения `NULL`, `NaN` и `Inf` сортируются по следующему принципу:
-``` sql
+```sql
SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
```
-```
+```text
┌─arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf])─┐
│ [-inf,-4,1,2,3,inf,nan,nan,NULL,NULL] │
└───────────────────────────────────────────────────────────┘
@@ -504,10 +507,10 @@ SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
Рассмотрим пример:
-``` sql
+```sql
SELECT arraySort((x) -> -x, [1, 2, 3]) as res;
```
-```
+```text
┌─res─────┐
│ [3,2,1] │
└─────────┘
@@ -517,11 +520,11 @@ SELECT arraySort((x) -> -x, [1, 2, 3]) as res;
Лямбда-функция может принимать несколько аргументов. В этом случае, в функцию `arraySort` нужно передавать несколько массивов, которые будут соответствовать аргументам лямбда-функции (массивы должны быть одинаковой длины). Следует иметь в виду, что результат будет содержать элементы только из первого массива; элементы из всех последующих массивов будут задавать ключи сортировки. Например:
-``` sql
+```sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```
-```
+```text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
@@ -531,18 +534,18 @@ SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
Ниже приведены другие примеры.
-``` sql
+```sql
SELECT arraySort((x, y) -> y, [0, 1, 2], ['c', 'b', 'a']) as res;
```
-``` sql
+```text
┌─res─────┐
│ [2,1,0] │
└─────────┘
```
-``` sql
+```sql
SELECT arraySort((x, y) -> -y, [0, 1, 2], [1, 2, 3]) as res;
```
-``` sql
+```text
┌─res─────┐
│ [2,1,0] │
└─────────┘
@@ -557,10 +560,10 @@ SELECT arraySort((x, y) -> -y, [0, 1, 2], [1, 2, 3]) as res;
Пример сортировки целочисленных значений:
-``` sql
+```sql
SELECT arrayReverseSort([1, 3, 3, 0]);
```
-```
+```text
┌─arrayReverseSort([1, 3, 3, 0])─┐
│ [3,3,1,0] │
└────────────────────────────────┘
@@ -568,21 +571,21 @@ SELECT arrayReverseSort([1, 3, 3, 0]);
Пример сортировки строковых значений:
-``` sql
+```sql
SELECT arrayReverseSort(['hello', 'world', '!']);
```
-```
+```text
┌─arrayReverseSort(['hello', 'world', '!'])─┐
│ ['world','hello','!'] │
└───────────────────────────────────────────┘
-```
+```
Значения `NULL`, `NaN` и `Inf` сортируются в следующем порядке:
-``` sql
+```sql
SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
```
-``` sql
+```text
┌─res───────────────────────────────────┐
│ [inf,3,2,1,-4,-inf,nan,nan,NULL,NULL] │
└───────────────────────────────────────┘
@@ -595,10 +598,10 @@ SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
Функция `arrayReverseSort` является [функцией высшего порядка](higher_order_functions.md). Вы можете передать ей в качестве первого аргумента лямбда-функцию. Например:
-``` sql
+```sql
SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
```
-```
+```text
┌─res─────┐
│ [1,2,3] │
└─────────┘
@@ -611,10 +614,10 @@ SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
Лямбда-функция может принимать на вход несколько аргументов. В этом случае, в функцию `arrayReverseSort` нужно передавать несколько массивов, которые будут соответствовать аргументам лямбда-функции (массивы должны быть одинаковой длины). Следует иметь в виду, что результат будет содержать элементы только из первого массива; элементы из всех последующих массивов будут определять ключи сортировки. Например:
-``` sql
+```sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```
-``` sql
+```text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
@@ -627,18 +630,18 @@ SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
Ниже приведены ещё примеры.
-``` sql
+```sql
SELECT arrayReverseSort((x, y) -> y, [0, 1, 2], ['c', 'b', 'a']) as res;
```
-``` sql
+```text
┌─res─────┐
│ [0,1,2] │
└─────────┘
```
-``` sql
+```sql
SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res;
```
-``` sql
+```text
┌─res─────┐
│ [4,3,5] │
└─────────┘
diff --git a/docs/ru/query_language/functions/array_join.md b/docs/ru/query_language/functions/array_join.md
index 7a89ab73fb4..1317d4329f2 100644
--- a/docs/ru/query_language/functions/array_join.md
+++ b/docs/ru/query_language/functions/array_join.md
@@ -15,11 +15,11 @@
Пример:
-``` sql
+```sql
SELECT arrayJoin([1, 2, 3] AS src) AS dst, 'Hello', src
```
-```
+```text
┌─dst─┬─\'Hello\'─┬─src─────┐
│ 1 │ Hello │ [1,2,3] │
│ 2 │ Hello │ [1,2,3] │
diff --git a/docs/ru/query_language/functions/bitmap_functions.md b/docs/ru/query_language/functions/bitmap_functions.md
index 00608b72770..f54d88ab35f 100644
--- a/docs/ru/query_language/functions/bitmap_functions.md
+++ b/docs/ru/query_language/functions/bitmap_functions.md
@@ -4,7 +4,7 @@
Создаёт битовый массив из массива целочисленных значений.
-```
+```sql
bitmapBuild(array)
```
@@ -28,7 +28,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res)
Преобразует битовый массив в массив целочисленных значений.
-```
+```sql
bitmapToArray(bitmap)
```
@@ -42,7 +42,7 @@ bitmapToArray(bitmap)
SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res
```
-```
+```text
┌─res─────────┐
│ [1,2,3,4,5] │
└─────────────┘
@@ -52,7 +52,7 @@ SELECT bitmapToArray(bitmapBuild([1, 2, 3, 4, 5])) AS res
Проверяет вхождение элемента в битовый массив.
-```
+```sql
bitmapContains(haystack, needle)
```
@@ -70,7 +70,7 @@ bitmapContains(haystack, needle)
**Пример**
-``` sql
+```sql
SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res
```
```text
@@ -83,7 +83,7 @@ SELECT bitmapContains(bitmapBuild([1,5,7,9]), toUInt32(9)) AS res
Проверяет, имеют ли два битовых массива хотя бы один общий элемент.
-```
+```sql
bitmapHasAny(bitmap1, bitmap2)
```
@@ -104,7 +104,7 @@ bitmapHasAny(bitmap1, bitmap2)
SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -115,7 +115,7 @@ SELECT bitmapHasAny(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
Аналогично функции `hasAll(array, array)`, возвращает 1, если первый битовый массив содержит все элементы второго, и 0 в противном случае.
Если второй аргумент является пустым битовым массивом, то возвращает 1.
-```
+```sql
bitmapHasAll(bitmap,bitmap)
```
@@ -129,7 +129,7 @@ bitmapHasAll(bitmap,bitmap)
SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
```
-```
+```text
┌─res─┐
│ 0 │
└─────┘
@@ -139,7 +139,7 @@ SELECT bitmapHasAll(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res
Логическое И для двух битовых массивов. Результат — новый битовый массив.
-```
+```sql
bitmapAnd(bitmap,bitmap)
```
@@ -153,7 +153,7 @@ bitmapAnd(bitmap,bitmap)
SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res─┐
│ [3] │
└─────┘
@@ -163,7 +163,7 @@ SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re
Логическое ИЛИ для двух битовых массивов. Результат — новый битовый массив.
-```
+```sql
bitmapOr(bitmap,bitmap)
```
@@ -177,7 +177,7 @@ bitmapOr(bitmap,bitmap)
SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res─────────┐
│ [1,2,3,4,5] │
└─────────────┘
@@ -187,7 +187,7 @@ SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
Логическое исключающее ИЛИ для двух битовых массивов. Результат — новый битовый массив.
-```
+```sql
bitmapXor(bitmap,bitmap)
```
@@ -201,7 +201,7 @@ bitmapXor(bitmap,bitmap)
SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res───────┐
│ [1,2,4,5] │
└───────────┘
@@ -211,7 +211,7 @@ SELECT bitmapToArray(bitmapXor(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re
Разность двух битовых массивов (логическое И с отрицанием второго аргумента, AND-NOT). Результат — новый битовый массив.
-```
+```sql
bitmapAndnot(bitmap,bitmap)
```
@@ -225,7 +225,7 @@ bitmapAndnot(bitmap,bitmap)
SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS res
```
-```
+```text
┌─res───┐
│ [1,2] │
└───────┘
@@ -235,7 +235,7 @@ SELECT bitmapToArray(bitmapAndnot(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS
Возвращает кардинальность битового массива в виде значения типа `UInt64`.
-```
+```sql
bitmapCardinality(bitmap)
```
@@ -249,7 +249,7 @@ bitmapCardinality(bitmap)
SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res
```
-```
+```text
┌─res─┐
│ 5 │
└─────┘
@@ -259,7 +259,7 @@ SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res
Выполняет логическое И и возвращает кардинальность (`UInt64`) результирующего битового массива.
-```
+```sql
bitmapAndCardinality(bitmap,bitmap)
```
@@ -273,7 +273,7 @@ bitmapAndCardinality(bitmap,bitmap)
SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -283,7 +283,7 @@ SELECT bitmapAndCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Выполняет логическое ИЛИ и возвращает кардинальность (`UInt64`) результирующего битового массива.
-```
+```sql
bitmapOrCardinality(bitmap,bitmap)
```
@@ -297,7 +297,7 @@ bitmapOrCardinality(bitmap,bitmap)
SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 5 │
└─────┘
@@ -307,7 +307,7 @@ SELECT bitmapOrCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Выполняет логическое исключающее ИЛИ и возвращает кардинальность (`UInt64`) результирующего битового массива.
-```
+```sql
bitmapXorCardinality(bitmap,bitmap)
```
@@ -321,7 +321,7 @@ bitmapXorCardinality(bitmap,bitmap)
SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 4 │
└─────┘
@@ -331,7 +331,7 @@ SELECT bitmapXorCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
Вычисляет разность двух битовых массивов (AND-NOT) и возвращает кардинальность (`UInt64`) результирующего битового массива.
-```
+```sql
bitmapAndnotCardinality(bitmap,bitmap)
```
@@ -345,7 +345,7 @@ bitmapAndnotCardinality(bitmap,bitmap)
SELECT bitmapAndnotCardinality(bitmapBuild([1,2,3]),bitmapBuild([3,4,5])) AS res;
```
-```
+```text
┌─res─┐
│ 2 │
└─────┘
diff --git a/docs/ru/query_language/functions/conditional_functions.md b/docs/ru/query_language/functions/conditional_functions.md
index 0140a008123..a648314deb4 100644
--- a/docs/ru/query_language/functions/conditional_functions.md
+++ b/docs/ru/query_language/functions/conditional_functions.md
@@ -31,7 +31,7 @@ multiIf(cond_1, then_1, cond_2, then_2...else)
Рассмотрим таблицу
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -40,7 +40,7 @@ multiIf(cond_1, then_1, cond_2, then_2...else)
Выполним запрос `SELECT multiIf(isNull(y), x, y < 3, y, NULL) FROM t_null`. Результат:
-```
+```text
┌─multiIf(isNull(y), x, less(y, 3), y, NULL)─┐
│ 1 │
│ ᴺᵁᴸᴸ │
diff --git a/docs/ru/query_language/functions/date_time_functions.md b/docs/ru/query_language/functions/date_time_functions.md
index 2503beb5810..432bf56652b 100644
--- a/docs/ru/query_language/functions/date_time_functions.md
+++ b/docs/ru/query_language/functions/date_time_functions.md
@@ -4,7 +4,7 @@
Все функции по работе с датой и временем, для которых это имеет смысл, могут принимать второй, необязательный аргумент - имя часового пояса. Пример: Asia/Yekaterinburg. В этом случае, они используют не локальный часовой пояс (по умолчанию), а указанный.
-``` sql
+```sql
SELECT
toDateTime('2016-06-15 23:00:00') AS time,
toDate(time) AS date_local,
@@ -12,7 +12,7 @@ SELECT
toString(time, 'US/Samoa') AS time_samoa
```
-```
+```text
┌────────────────time─┬─date_local─┬─date_yekat─┬─time_samoa──────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-16 │ 2016-06-15 09:00:00 │
└─────────────────────┴────────────┴────────────┴─────────────────────┘
diff --git a/docs/ru/query_language/functions/ext_dict_functions.md b/docs/ru/query_language/functions/ext_dict_functions.md
index e7f23a64664..c07f8adf84c 100644
--- a/docs/ru/query_language/functions/ext_dict_functions.md
+++ b/docs/ru/query_language/functions/ext_dict_functions.md
@@ -6,7 +6,7 @@
Извлекает значение из внешнего словаря.
-```
+```sql
dictGet('dict_name', 'attr_name', id_expr)
dictGetOrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```
@@ -95,7 +95,7 @@ LIMIT 3
Проверяет, присутствует ли запись с указанным ключом в словаре.
-```
+```sql
dictHas('dict_name', id)
```
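+
+Пример-набросок (имя словаря `ext_dict` условное):
+
+```sql
+SELECT dictHas('ext_dict', toUInt64(1))
+```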
@@ -115,7 +115,7 @@ dictHas('dict_name', id)
Для иерархического словаря возвращает массив ключей словаря, начиная с переданного `id_expr` и продолжая цепочкой родительских элементов.
-```
+```sql
dictGetHierarchy('dict_name', id)
```
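+
+Пример-набросок (иерархический словарь `regions` условный):
+
+```sql
+SELECT dictGetHierarchy('regions', toUInt64(213))
+```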
@@ -167,7 +167,7 @@ ClickHouse поддерживает специализированные фун
Синтаксис:
-```
+```sql
dictGet[Type]('dict_name', 'attr_name', id_expr)
dictGet[Type]OrDefault('dict_name', 'attr_name', id_expr, default_value_expr)
```
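+
+Пример-набросок (словарь `ext_dict` и атрибут `attr_string` условные):
+
+```sql
+SELECT dictGetStringOrDefault('ext_dict', 'attr_string', toUInt64(0), 'нет значения')
+```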
diff --git a/docs/ru/query_language/functions/functions_for_nulls.md b/docs/ru/query_language/functions/functions_for_nulls.md
index 9899fa8c786..65457fa81b7 100644
--- a/docs/ru/query_language/functions/functions_for_nulls.md
+++ b/docs/ru/query_language/functions/functions_for_nulls.md
@@ -4,7 +4,7 @@
Проверяет, является ли аргумент [NULL](../syntax.md#null).
-```
+```sql
isNull(x)
```
@@ -21,7 +21,7 @@ isNull(x)
Входная таблица
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -30,25 +30,20 @@ isNull(x)
Запрос
+```sql
+SELECT x FROM t_null WHERE isNull(y)
```
-:) SELECT x FROM t_null WHERE isNull(y)
-
-SELECT x
-FROM t_null
-WHERE isNull(y)
-
+```text
┌─x─┐
│ 1 │
└───┘
-
-1 rows in set. Elapsed: 0.010 sec.
```
## isNotNull
Проверяет, не является ли аргумент [NULL](../syntax.md#null).
-```
+```sql
isNotNull(x)
```
@@ -65,7 +60,7 @@ isNotNull(x)
Входная таблица
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -74,25 +69,20 @@ isNotNull(x)
Запрос
+```sql
+SELECT x FROM t_null WHERE isNotNull(y)
```
-:) SELECT x FROM t_null WHERE isNotNull(y)
-
-SELECT x
-FROM t_null
-WHERE isNotNull(y)
-
+```text
┌─x─┐
│ 2 │
└───┘
-
-1 rows in set. Elapsed: 0.010 sec.
```
## coalesce
Последовательно слева направо проверяет, являются ли переданные аргументы `NULL`, и возвращает первый не `NULL`.
-```
+```sql
coalesce(x,...)
```
**Параметры**
@@ -108,7 +98,7 @@ coalesce(x,...)
Рассмотрим адресную книгу, в которой может быть указано несколько способов связи с клиентом.
-```
+```text
┌─name─────┬─mail─┬─phone─────┬──icq─┐
│ client 1 │ ᴺᵁᴸᴸ │ 123-45-67 │ 123 │
│ client 2 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
@@ -119,25 +109,21 @@ coalesce(x,...)
Получим из адресной книги первый доступный способ связаться с клиентом:
+```sql
+SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook
```
-:) SELECT coalesce(mail, phone, CAST(icq,'Nullable(String)')) FROM aBook
-
-SELECT coalesce(mail, phone, CAST(icq, 'Nullable(String)'))
-FROM aBook
-
+```text
┌─name─────┬─coalesce(mail, phone, CAST(icq, 'Nullable(String)'))─┐
│ client 1 │ 123-45-67 │
│ client 2 │ ᴺᵁᴸᴸ │
└──────────┴──────────────────────────────────────────────────────┘
-
-2 rows in set. Elapsed: 0.006 sec.
```
## ifNull
Возвращает альтернативное значение, если основной аргумент — `NULL`.
-```
+```sql
ifNull(x,alt)
```
@@ -153,16 +139,18 @@ ifNull(x,alt)
**Пример**
-```
+```sql
SELECT ifNull('a', 'b')
-
+```
+```text
┌─ifNull('a', 'b')─┐
│ a │
└──────────────────┘
```
-```
+```sql
SELECT ifNull(NULL, 'b')
-
+```
+```text
┌─ifNull(NULL, 'b')─┐
│ b │
└───────────────────┘
@@ -172,7 +160,7 @@ SELECT ifNull(NULL, 'b')
Возвращает `NULL`, если аргументы равны.
-```
+```sql
nullIf(x, y)
```
@@ -187,16 +175,18 @@ nullIf(x, y)
**Пример**
-```
+```sql
SELECT nullIf(1, 1)
-
+```
+```text
┌─nullIf(1, 1)─┐
│ ᴺᵁᴸᴸ │
└──────────────┘
```
-```
+```sql
SELECT nullIf(1, 2)
-
+```
+```text
┌─nullIf(1, 2)─┐
│ 1 │
└──────────────┘
@@ -206,7 +196,7 @@ SELECT nullIf(1, 2)
Приводит значение типа [Nullable](../../data_types/nullable.md) к не `Nullable`, если значение не `NULL`.
-```
+```sql
assumeNotNull(x)
```
@@ -223,14 +213,15 @@ assumeNotNull(x)
Рассмотрим таблицу `t_null`.
-```
+```sql
SHOW CREATE TABLE t_null
-
+```
+```text
┌─statement─────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_null ( x Int8, y Nullable(Int8)) ENGINE = TinyLog │
└───────────────────────────────────────────────────────────────────────────┘
```
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -239,17 +230,19 @@ SHOW CREATE TABLE t_null
Применим функцию `assumeNotNull` к столбцу `y`.
-```
+```sql
SELECT assumeNotNull(y) FROM t_null
-
+```
+```text
┌─assumeNotNull(y)─┐
│ 0 │
│ 3 │
└──────────────────┘
```
-```
+```sql
SELECT toTypeName(assumeNotNull(y)) FROM t_null
-
+```
+```text
┌─toTypeName(assumeNotNull(y))─┐
│ Int8 │
│ Int8 │
@@ -260,7 +253,7 @@ SELECT toTypeName(assumeNotNull(y)) FROM t_null
Преобразует тип аргумента к `Nullable`.
-```
+```sql
toNullable(x)
```
@@ -274,15 +267,18 @@ toNullable(x)
**Пример**
-```
+```sql
SELECT toTypeName(10)
-
+```
+```text
┌─toTypeName(10)─┐
│ UInt8 │
└────────────────┘
-
+```
+```sql
SELECT toTypeName(toNullable(10))
-
+```
+```text
┌─toTypeName(toNullable(10))─┐
│ Nullable(UInt8) │
└────────────────────────────┘
diff --git a/docs/ru/query_language/functions/geo.md b/docs/ru/query_language/functions/geo.md
index 63ceae9208e..55789c93ab4 100644
--- a/docs/ru/query_language/functions/geo.md
+++ b/docs/ru/query_language/functions/geo.md
@@ -4,7 +4,7 @@
Вычисляет расстояние между двумя точками на поверхности Земли по [формуле большого круга](https://en.wikipedia.org/wiki/Great-circle_distance).
-```
+```sql
greatCircleDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
```
@@ -29,7 +29,7 @@ greatCircleDistance(lon1Deg, lat1Deg, lon2Deg, lat2Deg)
SELECT greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)
```
-```
+```text
┌─greatCircleDistance(55.755831, 37.617673, -55.755831, -37.617673)─┐
│ 14132374.194975413 │
└───────────────────────────────────────────────────────────────────┘
@@ -62,7 +62,7 @@ pointInEllipses(x, y, x₀, y₀, a₀, b₀,...,xₙ, yₙ, aₙ, bₙ)
SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)
```
-```
+```text
┌─pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)─┐
│ 1 │
└─────────────────────────────────────────────────┘
@@ -72,7 +72,7 @@ SELECT pointInEllipses(10., 10., 10., 9.1, 1., 0.9999)
Проверяет, принадлежит ли точка многоугольнику на плоскости.
-```
+```sql
pointInPolygon((x, y), [(a, b), (c, d) ...], ...)
```
@@ -93,7 +93,7 @@ pointInPolygon((x, y), [(a, b), (c, d) ...], ...)
SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res
```
-```
+```text
┌─res─┐
│ 1 │
└─────┘
@@ -103,7 +103,7 @@ SELECT pointInPolygon((3., 3.), [(6, 0), (8, 4), (5, 8), (0, 2)]) AS res
Кодирует широту и долготу в строку geohash, смотрите [http://geohash.org/](http://geohash.org/), [https://en.wikipedia.org/wiki/Geohash](https://en.wikipedia.org/wiki/Geohash).
-```
+```sql
geohashEncode(longitude, latitude, [precision])
```
@@ -123,7 +123,7 @@ geohashEncode(longitude, latitude, [precision])
SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res
```
-```
+```text
┌─res──────────┐
│ ezs42d000000 │
└──────────────┘
@@ -133,7 +133,7 @@ SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res
Декодирует любую строку, закодированную в geohash, на долготу и широту.
-```
+```sql
geohashDecode(geohash_string)
```
@@ -151,7 +151,7 @@ geohashDecode(geohash_string)
SELECT geohashDecode('ezs42') AS res
```
-```
+```text
┌─res─────────────────────────────┐
│ (-5.60302734375,42.60498046875) │
└─────────────────────────────────┘
@@ -161,7 +161,7 @@ SELECT geohashDecode('ezs42') AS res
Получает H3 индекс точки `(lon, lat)` с заданным разрешением.
-```
+```sql
geoToH3(lon, lat, resolution)
```
@@ -180,10 +180,10 @@ geoToH3(lon, lat, resolution)
**Пример**
-``` sql
+```sql
SELECT geoToH3(37.79506683, 55.71290588, 15) as h3Index
```
-```
+```text
┌────────────h3Index─┐
│ 644325524701193974 │
└────────────────────┘
diff --git a/docs/ru/query_language/functions/hash_functions.md b/docs/ru/query_language/functions/hash_functions.md
index e171b2bfa38..44793b0e290 100644
--- a/docs/ru/query_language/functions/hash_functions.md
+++ b/docs/ru/query_language/functions/hash_functions.md
@@ -6,7 +6,7 @@
[Интерпретирует](../../query_language/functions/type_conversion_functions.md#type_conversion_functions-reinterpretAsString) все входные параметры как строки и вычисляет хэш [MD5](https://ru.wikipedia.org/wiki/MD5) для каждой из них. Затем объединяет хэши, берет первые 8 байт хэша результирующей строки и интерпретирует их как значение типа `UInt64` с big-endian порядком байтов.
-```
+```sql
halfMD5(par1, ...)
```
@@ -43,7 +43,7 @@ SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')
Генерирует 64-битное значение [SipHash](https://131002.net/siphash/).
-```
+```sql
sipHash64(par1,...)
```
@@ -70,7 +70,7 @@ sipHash64(par1,...)
SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')) AS SipHash, toTypeName(SipHash) AS type
```
-```
+```text
┌──────────────SipHash─┬─type───┐
│ 13726873534472839665 │ UInt64 │
└──────────────────────┴────────┘
@@ -86,7 +86,7 @@ SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00
Генерирует 64-битное значение [CityHash](https://github.com/google/cityhash).
-```
+```sql
cityHash64(par1,...)
```
@@ -151,7 +151,7 @@ SELECT groupBitXor(cityHash64(*)) FROM table
Генерирует 64-битное значение [FarmHash](https://github.com/google/farmhash).
-```
+```sql
farmHash64(par1, ...)
```
@@ -179,20 +179,71 @@ SELECT farmHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:0
## javaHash {#hash_functions-javahash}
-Вычисляет [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) от строки.
-Принимает аргумент типа String. Возвращает значение типа Int32.
+Вычисляет [JavaHash](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/478a4add975b/src/share/classes/java/lang/String.java#l1452) от строки. `JavaHash` не отличается ни скоростью, ни качеством, поэтому эту функцию следует считать устаревшей. Используйте её только в тех случаях, когда значение хэша требуется получить именно по этому алгоритму.
-## hiveHash
+```sql
+SELECT javaHash('');
+```
-Вычисляет HiveHash от строки.
-Принимает аргумент типа String. Возвращает значение типа Int32.
-HiveHash — это результат [JavaHash](#hash_functions-javahash) с обнулённым битом знака числа. Функция используется в [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) вплоть до версии 3.0.
+**Возвращаемое значение**
+
+Хэш-значение типа `Int32`.
+
+Тип хэша: `javaHash`.
+
+**Пример**
+
+Запрос:
+
+```sql
+SELECT javaHash('Hello, world!');
+```
+
+Ответ:
+
+```text
+┌─javaHash('Hello, world!')─┐
+│ -1880044555 │
+└───────────────────────────┘
+```
+
+## hiveHash {#hash_functions-hivehash}
+
+Вычисляет `HiveHash` от строки.
+
+```sql
+SELECT hiveHash('');
+```
+
+`HiveHash` — это результат [JavaHash](#hash_functions-javahash) с обнулённым битом знака числа. Функция используется в [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) вплоть до версии 3.0.
+
+**Возвращаемое значение**
+
+Хэш-значение типа `Int32`.
+
+Тип хэша: `hiveHash`.
+
+**Пример**
+
+Запрос:
+
+```sql
+SELECT hiveHash('Hello, world!');
+```
+
+Ответ:
+
+```text
+┌─hiveHash('Hello, world!')─┐
+│ 267439093 │
+└───────────────────────────┘
+```
## metroHash64
Генерирует 64-битное значение [MetroHash](http://www.jandrewrogers.com/2015/05/27/metrohash/).
-```
+```sql
metroHash64(par1, ...)
```
@@ -226,7 +277,7 @@ SELECT metroHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:
Генерирует значение [MurmurHash2](https://github.com/aappleby/smhasher).
-```
+```sql
murmurHash2_32(par1, ...)
murmurHash2_64(par1, ...)
```
@@ -256,7 +307,7 @@ SELECT murmurHash2_64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:
Генерирует значение [MurmurHash3](https://github.com/aappleby/smhasher).
-```
+```sql
murmurHash3_32(par1, ...)
murmurHash3_64(par1, ...)
```
@@ -286,7 +337,7 @@ SELECT murmurHash3_32(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:
Генерирует значение [MurmurHash3](https://github.com/aappleby/smhasher).
-```
+```sql
murmurHash3_128( expr )
```
@@ -310,10 +361,42 @@ SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3)
└──────────────────┴─────────────────┘
```
-## xxHash32, xxHash64
+## xxHash32, xxHash64 {#hash_functions-xxhash32-xxhash64}
-Вычисляет xxHash от строки.
-Принимает аргумент типа String. Возвращает значение типа Uint64 или Uint32.
-Дополнительные сведения см. по ссылке: [xxHash](http://cyan4973.github.io/xxHash/)
+Вычисляет `xxHash` от строки. Предлагается в двух вариантах: 32 и 64 бита.
+
+```sql
+SELECT xxHash32('');
+
+-- или
+
+SELECT xxHash64('');
+```
+
+**Возвращаемое значение**
+
+Хэш-значение типа `UInt32` или `UInt64`.
+
+Тип хэша: `xxHash`.
+
+**Пример**
+
+Запрос:
+
+```sql
+SELECT xxHash32('Hello, world!');
+```
+
+Ответ:
+
+```text
+┌─xxHash32('Hello, world!')─┐
+│ 834093149 │
+└───────────────────────────┘
+```
+
+**Смотрите также**
+
+- [xxHash](http://cyan4973.github.io/xxHash/).
[Оригинальная статья](https://clickhouse.yandex/docs/ru/query_language/functions/hash_functions/)
diff --git a/docs/ru/query_language/functions/higher_order_functions.md b/docs/ru/query_language/functions/higher_order_functions.md
index aea793d27b4..b64a561f58a 100644
--- a/docs/ru/query_language/functions/higher_order_functions.md
+++ b/docs/ru/query_language/functions/higher_order_functions.md
@@ -25,9 +25,10 @@
Примеры:
-``` sql
+```sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
-
+```
+```text
┌─res─────┐
│ [3,4,5] │
└─────────┘
@@ -35,9 +36,10 @@ SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
Следующий пример показывает, как создать кортежи из элементов разных массивов:
-``` sql
+```sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
-
+```
+```text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
@@ -51,17 +53,17 @@ SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
Примеры:
-``` sql
+```sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```
-```
+```text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```
-``` sql
+```sql
SELECT
arrayFilter(
(i, x) -> x LIKE '%World%',
@@ -70,7 +72,7 @@ SELECT
AS res
```
-```
+```text
┌─res─┐
│ [2] │
└─────┘
@@ -107,11 +109,11 @@ SELECT
Пример:
-``` sql
+```sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```
-```
+```text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
@@ -126,11 +128,11 @@ SELECT arrayCumSum([1, 1, 1, 1]) AS res
Пример:
-``` sql
+```sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]);
```
-```
+```text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
@@ -144,10 +146,10 @@ SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]);
Пример:
-``` sql
+```sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```
-``` sql
+```text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
diff --git a/docs/ru/query_language/functions/ip_address_functions.md b/docs/ru/query_language/functions/ip_address_functions.md
index 57c11b46d81..5b1c5e32172 100644
--- a/docs/ru/query_language/functions/ip_address_functions.md
+++ b/docs/ru/query_language/functions/ip_address_functions.md
@@ -24,7 +24,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─k──────────────┬─────c─┐
│ 83.149.9.xxx │ 26238 │
│ 217.118.81.xxx │ 26074 │
@@ -50,7 +50,7 @@ IPv6-mapped IPv4 адреса выводится в формате ::ffff:111.22
SELECT IPv6NumToString(toFixedString(unhex('2A0206B8000000000000000000000011'), 16)) AS addr
```
-```
+```text
┌─addr─────────┐
│ 2a02:6b8::11 │
└──────────────┘
@@ -67,7 +67,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─IPv6NumToString(ClientIP6)──────────────┬─────c─┐
│ 2a02:2168:aaa:bbbb::2 │ 24695 │
│ 2a02:2698:abcd:abcd:abcd:abcd:8888:5555 │ 22408 │
@@ -93,7 +93,7 @@ ORDER BY c DESC
LIMIT 10
```
-```
+```text
┌─IPv6NumToString(ClientIP6)─┬──────c─┐
│ ::ffff:94.26.111.111 │ 747440 │
│ ::ffff:37.143.222.4 │ 529483 │
@@ -121,7 +121,7 @@ HEX может быть в любом регистре.
SELECT IPv6NumToString(IPv4ToIPv6(IPv4StringToNum('192.168.0.1'))) AS addr
```
-```
+```text
┌─addr───────────────┐
│ ::ffff:192.168.0.1 │
└────────────────────┘
@@ -140,7 +140,7 @@ SELECT
cutIPv6(ipv4, 0, 2)
```
-```
+```text
┌─cutIPv6(ipv6, 2, 0)─────────────────┬─cutIPv6(ipv4, 0, 2)─┐
│ 2001:db8:ac10:fe01:feed:babe:cafe:0 │ ::ffff:192.168.0.0 │
└─────────────────────────────────────┴─────────────────────┘
@@ -184,7 +184,7 @@ SELECT
toTypeName(toIPv4(IPv4_string))
```
-```
+```text
┌─toTypeName(IPv4StringToNum(IPv4_string))─┬─toTypeName(toIPv4(IPv4_string))─┐
│ UInt32 │ IPv4 │
└──────────────────────────────────────────┴─────────────────────────────────┘
@@ -198,7 +198,7 @@ SELECT
hex(toIPv4(IPv4_string))
```
-```
+```text
┌─hex(IPv4StringToNum(IPv4_string))─┬─hex(toIPv4(IPv4_string))─┐
│ ABE1822D │ ABE1822D │
└───────────────────────────────────┴──────────────────────────┘
@@ -216,7 +216,7 @@ SELECT
toTypeName(toIPv6(IPv6_string))
```
-```
+```text
┌─toTypeName(IPv6StringToNum(IPv6_string))─┬─toTypeName(toIPv6(IPv6_string))─┐
│ FixedString(16) │ IPv6 │
└──────────────────────────────────────────┴─────────────────────────────────┘
@@ -230,7 +230,7 @@ SELECT
hex(toIPv6(IPv6_string))
```
-```
+```text
┌─hex(IPv6StringToNum(IPv6_string))─┬─hex(toIPv6(IPv6_string))─────────┐
│ 20010438FFFF000000000000407D1BC1 │ 20010438FFFF000000000000407D1BC1 │
└───────────────────────────────────┴──────────────────────────────────┘
diff --git a/docs/ru/query_language/functions/json_functions.md b/docs/ru/query_language/functions/json_functions.md
index d1794112b41..d1de97ef10e 100644
--- a/docs/ru/query_language/functions/json_functions.md
+++ b/docs/ru/query_language/functions/json_functions.md
@@ -35,7 +35,7 @@
Примеры:
-```
+```sql
visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'
visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'
```
@@ -46,7 +46,7 @@ visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'
Примеры:
-```
+```sql
visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''
@@ -65,9 +65,9 @@ visitParamExtractString('{"abc":"hello}', 'abc') = ''
Примеры:
-```
-select JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
-select JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
+```sql
+SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
+SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
```
`indices_or_keys` — это список из нуля или более аргументов, каждый из которых может быть либо строкой, либо целым числом.
@@ -82,12 +82,12 @@ select JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
Примеры:
-```
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 2) = 'b'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -1) = 'b'
-select JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
-select JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
+```sql
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 2) = 'b'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -1) = 'b'
+SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
+SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```
## JSONLength(json[, indices_or_keys]...)
@@ -98,9 +98,9 @@ select JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
Примеры:
-```
-select JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
-select JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
+```sql
+SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
+SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```
## JSONType(json[, indices_or_keys]...)
@@ -111,10 +111,10 @@ select JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
Примеры:
-```
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
-select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
+```sql
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
+SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```
## JSONExtractUInt(json[, indices_or_keys]...)
@@ -131,10 +131,10 @@ select JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
Примеры:
-```
-select JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
-select JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
-select JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
+```sql
+SELECT JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
+SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
+SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
```
## JSONExtractString(json[, indices_or_keys]...)
@@ -147,12 +147,12 @@ select JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
Примеры:
-```
-select JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
-select JSONExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
-select JSONExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
-select JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
-select JSONExtractString('{"abc":"hello}', 'abc') = ''
+```sql
+SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
+SELECT JSONExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'
+SELECT JSONExtractString('{"abc":"\\u263a"}', 'abc') = '☺'
+SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
+SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```
## JSONExtract(json[, indices_or_keys...], return_type)
@@ -166,7 +166,7 @@ select JSONExtractString('{"abc":"hello}', 'abc') = ''
Примеры:
-```
+```sql
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(String, Array(Float64))') = ('hello',[-100,200,300])
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(b Array(Float64), a String)') = ([-100,200,300],'hello')
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 'Array(Nullable(Int8))') = [-100, NULL, NULL]
@@ -182,7 +182,7 @@ SELECT JSONExtract('{"day": 5}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \
Пример:
-```
+```sql
SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)];
```
@@ -194,8 +194,8 @@ SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8')
Пример:
-```
-select JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
+```sql
+SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]'
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/query_language/functions/json_functions/)
diff --git a/docs/ru/query_language/functions/math_functions.md b/docs/ru/query_language/functions/math_functions.md
index 0470a469780..92feceb3a95 100644
--- a/docs/ru/query_language/functions/math_functions.md
+++ b/docs/ru/query_language/functions/math_functions.md
@@ -39,11 +39,11 @@
Пример (правило трёх сигм):
-``` sql
+```sql
SELECT erf(3 / sqrt(2))
```
-```
+```text
┌─erf(divide(3, sqrt(2)))─┐
│ 0.9973002039367398 │
└─────────────────────────┘
diff --git a/docs/ru/query_language/functions/other_functions.md b/docs/ru/query_language/functions/other_functions.md
index bfb19bb64b1..987840cac99 100644
--- a/docs/ru/query_language/functions/other_functions.md
+++ b/docs/ru/query_language/functions/other_functions.md
@@ -8,7 +8,7 @@
Извлекает конечную часть строки после последнего слэша или бэкслэша. Функция часто используется для извлечения имени файла из пути.
-```
+```sql
basename( expr )
```
@@ -65,9 +65,11 @@ SELECT 'some-file-name' AS a, basename(a)
`NULL` представляется как строка, соответствующая отображению `NULL` в форматах `Pretty`.
-```
+```sql
SELECT visibleWidth(NULL)
+```
+```text
┌─visibleWidth(NULL)─┐
│ 4 │
└────────────────────┘
@@ -131,7 +133,7 @@ SELECT visibleWidth(NULL)
Пример:
-``` sql
+```sql
SELECT
toHour(EventTime) AS h,
count() AS c,
@@ -141,7 +143,7 @@ GROUP BY h
ORDER BY h ASC
```
-```
+```text
┌──h─┬──────c─┬─bar────────────────┐
│ 0 │ 292907 │ █████████▋ │
│ 1 │ 180563 │ ██████ │
@@ -199,7 +201,7 @@ ORDER BY h ASC
Пример:
-``` sql
+```sql
SELECT
transform(SearchEngineID, [2, 3], ['Yandex', 'Google'], 'Other') AS title,
count() AS c
@@ -209,7 +211,7 @@ GROUP BY title
ORDER BY c DESC
```
-```
+```text
┌─title─────┬──────c─┐
│ Yandex │ 498635 │
│ Google │ 229872 │
@@ -228,7 +230,7 @@ ORDER BY c DESC
Пример:
-``` sql
+```sql
SELECT
transform(domain(Referer), ['yandex.ru', 'google.ru', 'vk.com'], ['www.yandex', 'example.com']) AS s,
count() AS c
@@ -238,7 +240,7 @@ ORDER BY count() DESC
LIMIT 10
```
-```
+```text
┌─s──────────────┬───────c─┐
│ │ 2906259 │
│ www.yandex │ 867767 │
@@ -257,13 +259,13 @@ LIMIT 10
Пример:
-``` sql
+```sql
SELECT
arrayJoin([1, 1024, 1024*1024, 192851925]) AS filesize_bytes,
formatReadableSize(filesize_bytes) AS filesize
```
-```
+```text
┌─filesize_bytes─┬─filesize───┐
│ 1 │ 1.00 B │
│ 1024 │ 1.00 KiB │
@@ -302,7 +304,7 @@ SELECT
Если значение `offset` выходит за пределы блока данных, то берётся значение по умолчанию для колонки `column`. Если передан параметр `default_value`, то значение берётся из него.
Например, эта функция может использоваться, чтобы оценить year-over-year значение показателя:
-``` sql
+```sql
WITH toDate('2018-01-01') AS start_date
SELECT
toStartOfMonth(start_date + (number * 32)) AS month,
@@ -312,7 +314,7 @@ SELECT
FROM numbers(16)
```
-```
+```text
┌──────month─┬─money─┬─prev_year─┬─year_over_year─┐
│ 2018-01-01 │ 32 │ 0 │ 0 │
│ 2018-02-01 │ 63 │ 0 │ 0 │
@@ -342,7 +344,7 @@ FROM numbers(16)
Пример:
-``` sql
+```sql
SELECT
EventID,
EventTime,
@@ -359,7 +361,7 @@ FROM
)
```
-```
+```text
┌─EventID─┬───────────EventTime─┬─delta─┐
│ 1106 │ 2016-11-24 00:00:04 │ 0 │
│ 1107 │ 2016-11-24 00:00:05 │ 1 │
@@ -371,21 +373,22 @@ FROM
Обратите внимание — размер блока влияет на результат. С каждым новым блоком состояние `runningDifference` сбрасывается.
-``` sql
+```sql
SELECT
number,
runningDifference(number + 1) AS diff
FROM numbers(100000)
WHERE diff != 1
-
+```
+```text
┌─number─┬─diff─┐
│ 0 │ 0 │
└────────┴──────┘
┌─number─┬─diff─┐
│ 65536 │ 0 │
└────────┴──────┘
-
-
+```
+```sql
set max_block_size=100000 -- по умолчанию 65536!
SELECT
@@ -393,7 +396,8 @@ SELECT
runningDifference(number + 1) AS diff
FROM numbers(100000)
WHERE diff != 1
-
+```
+```text
┌─number─┬─diff─┐
│ 0 │ 0 │
└────────┴──────┘
@@ -415,7 +419,7 @@ WHERE diff != 1
Возвращает количество полей в [Enum](../../data_types/enum.md).
-```
+```sql
getSizeOfEnumType(value)
```
@@ -431,9 +435,11 @@ getSizeOfEnumType(value)
**Пример**
-```
+```sql
SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x
+```
+```text
┌─x─┐
│ 2 │
└───┘
@@ -443,7 +449,7 @@ SELECT getSizeOfEnumType( CAST('a' AS Enum8('a' = 1, 'b' = 2) ) ) AS x
Возвращает имя класса, которым представлен тип данных столбца в оперативной памяти.
-```
+```sql
toColumnTypeName(value)
```
@@ -457,21 +463,20 @@ toColumnTypeName(value)
**Пример разницы между `toTypeName` и `toColumnTypeName`**
+```sql
+SELECT toTypeName(CAST('2018-01-01 01:02:03' AS DateTime))
```
-:) select toTypeName(cast('2018-01-01 01:02:03' AS DateTime))
-
-SELECT toTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))
+```text
┌─toTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ DateTime │
└─────────────────────────────────────────────────────┘
+```
+```sql
+SELECT toColumnTypeName(CAST('2018-01-01 01:02:03' AS DateTime))
+```
-1 rows in set. Elapsed: 0.008 sec.
-
-:) select toColumnTypeName(cast('2018-01-01 01:02:03' AS DateTime))
-
-SELECT toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))
-
+```text
┌─toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ Const(UInt32) │
└───────────────────────────────────────────────────────────┘
@@ -483,7 +488,7 @@ SELECT toColumnTypeName(CAST('2018-01-01 01:02:03', 'DateTime'))
Выводит развернутое описание структур данных в оперативной памяти.
-```
+```sql
dumpColumnStructure(value)
```
@@ -497,9 +502,11 @@ dumpColumnStructure(value)
**Пример**
-```
+```sql
SELECT dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))
+```
+```text
┌─dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))─┐
│ DateTime, Const(size = 1, UInt32(size = 1)) │
└──────────────────────────────────────────────────────────────┘
@@ -511,7 +518,7 @@ SELECT dumpColumnStructure(CAST('2018-01-01 01:02:03', 'DateTime'))
Не учитывает значения по умолчанию для столбцов, заданные пользователем.
-```
+```sql
defaultValueOfArgumentType(expression)
```
@@ -527,26 +534,23 @@ defaultValueOfArgumentType(expression)
**Пример**
+```sql
+SELECT defaultValueOfArgumentType( CAST(1 AS Int8) )
```
-:) SELECT defaultValueOfArgumentType( CAST(1 AS Int8) )
-
-SELECT defaultValueOfArgumentType(CAST(1, 'Int8'))
+```text
┌─defaultValueOfArgumentType(CAST(1, 'Int8'))─┐
│ 0 │
└─────────────────────────────────────────────┘
+```
+```sql
+SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) )
+```
-1 rows in set. Elapsed: 0.002 sec.
-
-:) SELECT defaultValueOfArgumentType( CAST(1 AS Nullable(Int8) ) )
-
-SELECT defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))
-
+```text
┌─defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))─┐
│ ᴺᵁᴸᴸ │
└───────────────────────────────────────────────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
## indexHint
@@ -565,9 +569,11 @@ SELECT defaultValueOfArgumentType(CAST(1, 'Nullable(Int8)'))
Рассмотрим таблицу с тестовыми данными [ontime](../../getting_started/example_datasets/ontime.md).
-```
+```sql
SELECT count() FROM ontime
+```
+```text
┌─count()─┐
│ 4276457 │
└─────────┘
@@ -577,16 +583,11 @@ SELECT count() FROM ontime
Выполним выборку по дате следующим образом:
+```sql
+SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
```
-:) SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k
-
-SELECT
- FlightDate AS k,
- count()
-FROM ontime
-GROUP BY k
-ORDER BY k ASC
+```text
┌──────────k─┬─count()─┐
│ 2017-01-01 │ 13970 │
│ 2017-01-02 │ 15882 │
@@ -595,28 +596,18 @@ ORDER BY k ASC
│ 2017-09-29 │ 16384 │
│ 2017-09-30 │ 12520 │
└────────────┴─────────┘
-
-273 rows in set. Elapsed: 0.072 sec. Processed 4.28 million rows, 8.55 MB (59.00 million rows/s., 118.01 MB/s.)
```
В этой выборке индекс не используется и ClickHouse обработал всю таблицу (`Processed 4.28 million rows`). Для подключения индекса выберем конкретную дату и выполним следующий запрос:
+```sql
+SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
```
-:) SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k
-
-SELECT
- FlightDate AS k,
- count()
-FROM ontime
-WHERE k = '2017-09-15'
-GROUP BY k
-ORDER BY k ASC
+```text
┌──────────k─┬─count()─┐
│ 2017-09-15 │ 16428 │
└────────────┴─────────┘
-
-1 rows in set. Elapsed: 0.014 sec. Processed 32.74 thousand rows, 65.49 KB (2.31 million rows/s., 4.63 MB/s.)
```
В последней строке выдачи видно, что благодаря использованию индекса, ClickHouse обработал значительно меньшее количество строк (`Processed 32.74 thousand rows`).
@@ -624,9 +615,7 @@ ORDER BY k ASC
Теперь передадим выражение `k = '2017-09-15'` в функцию `indexHint`:
-```
-:) SELECT FlightDate AS k, count() FROM ontime WHERE indexHint(k = '2017-09-15') GROUP BY k ORDER BY k
-
+```sql
SELECT
FlightDate AS k,
count()
@@ -634,15 +623,15 @@ FROM ontime
WHERE indexHint(k = '2017-09-15')
GROUP BY k
ORDER BY k ASC
+```
+```text
┌──────────k─┬─count()─┐
│ 2017-09-14 │ 7071 │
│ 2017-09-15 │ 16428 │
│ 2017-09-16 │ 1077 │
│ 2017-09-30 │ 8167 │
└────────────┴─────────┘
-
-4 rows in set. Elapsed: 0.004 sec. Processed 32.74 thousand rows, 65.49 KB (8.97 million rows/s., 17.94 MB/s.)
```
В ответе на запрос видно, что ClickHouse применил индекс таким же образом, что и в предыдущий раз (`Processed 32.74 thousand rows`). Однако по результирующему набору строк видно, что выражение `k = '2017-09-15'` не использовалось при формировании результата.
@@ -655,7 +644,7 @@ ORDER BY k ASC
Используется для внутренней реализации [arrayJoin](array_join.md#functions_arrayjoin).
-```
+```sql
replicate(x, arr)
```
@@ -670,9 +659,11 @@ replicate(x, arr)
**Пример**
-```
+```sql
SELECT replicate(1, ['a', 'b', 'c'])
+```
+```text
┌─replicate(1, ['a', 'b', 'c'])─┐
│ [1,1,1] │
└───────────────────────────────┘
@@ -682,7 +673,7 @@ SELECT replicate(1, ['a', 'b', 'c'])
Возвращает объем оставшегося места в файловой системе, в которой расположены файлы баз данных. Смотрите описание конфигурационного параметра сервера [path](../../operations/server_settings/settings.md#server_settings-path).
-```
+```sql
filesystemAvailable()
```
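+
+Пример (в сочетании с функцией `formatReadableSize`):
+
+```sql
+SELECT formatReadableSize(filesystemAvailable()) AS free_space
+```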
@@ -733,7 +724,8 @@ custom_message - необязательный параметр, констант
```sql
SELECT throwIf(number = 3, 'Too many') FROM numbers(10);
-
+```
+```text
↙ Progress: 0.00 rows, 0.00 B (0.00 rows/s., 0.00 B/s.) Received exception from server (version 19.14.1):
Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Too many.
```
@@ -744,7 +736,9 @@ Code: 395. DB::Exception: Received from localhost:9000. DB::Exception: Too many.
```sql
SELECT identity(42)
+```
+```text
┌─identity(42)─┐
│ 42 │
└──────────────┘
diff --git a/docs/ru/query_language/functions/rounding_functions.md b/docs/ru/query_language/functions/rounding_functions.md
index 498498266e4..61bf6d94419 100644
--- a/docs/ru/query_language/functions/rounding_functions.md
+++ b/docs/ru/query_language/functions/rounding_functions.md
@@ -22,7 +22,7 @@ N может быть отрицательным.
Функция возвращает ближайшее значение указанного порядка. В случае, когда заданное число равноудалено от чисел необходимого порядка, функция возвращает то из них, которое имеет ближайшую чётную цифру (банковское округление).
-```
+```sql
round(expression [, decimal_places])
```
@@ -43,10 +43,10 @@ round(expression [, decimal_places])
**Пример использования**
-``` sql
+```sql
SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
```
-```
+```text
┌───x─┬─round(divide(number, 2))─┐
│ 0 │ 0 │
│ 0.5 │ 0 │
@@ -58,7 +58,7 @@ SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
Округление до ближайшего числа.
-```
+```text
round(3.2, 0) = 3
round(4.1267, 2) = 4.13
round(22,-1) = 20
@@ -68,7 +68,7 @@ round(-467,-2) = -500
Банковское округление.
-```
+```text
round(3.5) = 4
round(4.5) = 4
round(3.55, 1) = 3.6
diff --git a/docs/ru/query_language/functions/splitting_merging_functions.md b/docs/ru/query_language/functions/splitting_merging_functions.md
index 8561048b864..8d9c6aef14f 100644
--- a/docs/ru/query_language/functions/splitting_merging_functions.md
+++ b/docs/ru/query_language/functions/splitting_merging_functions.md
@@ -19,9 +19,11 @@ separator - необязательный параметр, константна
**Пример:**
-```
+```sql
SELECT alphaTokens('abca1abc')
+```
+```text
┌─alphaTokens('abca1abc')─┐
│ ['abca','abc'] │
└─────────────────────────┘
diff --git a/docs/ru/query_language/functions/string_functions.md b/docs/ru/query_language/functions/string_functions.md
index cc6563dacd5..f514ac1cbd3 100644
--- a/docs/ru/query_language/functions/string_functions.md
+++ b/docs/ru/query_language/functions/string_functions.md
@@ -45,7 +45,7 @@
Заменяет некорректные символы UTF-8 на символ `�` (U+FFFD). Все идущие подряд некорректные символы схлопываются в один заменяющий символ.
-```
+```sql
toValidUTF8( input_string )
```
@@ -80,13 +80,16 @@ SELECT toValidUTF8('\x61\xF0\x80\x80\x80b')
```sql
SELECT format('{1} {0} {1}', 'World', 'Hello')
-
+```
+```text
┌─format('{1} {0} {1}', 'World', 'Hello')─┐
│ Hello World Hello │
└─────────────────────────────────────────┘
-
+```
+```sql
SELECT format('{} {}', 'Hello', 'World')
-
+```
+```text
┌─format('{} {}', 'Hello', 'World')─┐
│ Hello World │
└───────────────────────────────────┘
diff --git a/docs/ru/query_language/functions/string_replace_functions.md b/docs/ru/query_language/functions/string_replace_functions.md
index 498b321fb1b..0c4cb9923f2 100644
--- a/docs/ru/query_language/functions/string_replace_functions.md
+++ b/docs/ru/query_language/functions/string_replace_functions.md
@@ -17,7 +17,7 @@
Пример 1. Переведём дату в американский формат:
-``` sql
+```sql
SELECT DISTINCT
EventDate,
replaceRegexpOne(toString(EventDate), '(\\d{4})-(\\d{2})-(\\d{2})', '\\2/\\3/\\1') AS res
@@ -26,7 +26,7 @@ LIMIT 7
FORMAT TabSeparated
```
-```
+```text
2014-03-17 03/17/2014
2014-03-18 03/18/2014
2014-03-19 03/19/2014
@@ -38,11 +38,11 @@ FORMAT TabSeparated
Пример 2. Размножить строку десять раз:
-``` sql
+```sql
SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0') AS res
```
-```
+```text
┌─res────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World!Hello, World! │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
@@ -51,11 +51,11 @@ SELECT replaceRegexpOne('Hello, World!', '.*', '\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0')
## replaceRegexpAll(haystack, pattern, replacement)
То же самое, но делается замена всех вхождений. Пример:
-``` sql
+```sql
SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
```
-```
+```text
┌─res────────────────────────┐
│ HHeelllloo,, WWoorrlldd!! │
└────────────────────────────┘
@@ -64,11 +64,11 @@ SELECT replaceRegexpAll('Hello, World!', '.', '\\0\\0') AS res
В качестве исключения, если регулярное выражение сработало на пустой подстроке, то замена делается не более одного раза.
Пример:
-``` sql
+```sql
SELECT replaceRegexpAll('Hello, World!', '^', 'here: ') AS res
```
-```
+```text
┌─res─────────────────┐
│ here: Hello, World! │
└─────────────────────┘
diff --git a/docs/ru/query_language/functions/type_conversion_functions.md b/docs/ru/query_language/functions/type_conversion_functions.md
index 8635ea089e0..72354e77f8c 100644
--- a/docs/ru/query_language/functions/type_conversion_functions.md
+++ b/docs/ru/query_language/functions/type_conversion_functions.md
@@ -116,7 +116,7 @@ SELECT toDecimal32OrZero(toString(-1.111), 2) AS val, toTypeName(val)
Форматы даты и даты-с-временем для функций toDate/toDateTime определены следующим образом:
-```
+```text
YYYY-MM-DD
YYYY-MM-DD hh:mm:ss
```
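+Например, обе формы разбираются напрямую (набросок; конкретные значения выбраны произвольно):
+
+```sql
+SELECT toDate('2019-09-25') AS d, toDateTime('2019-09-25 12:30:00') AS dt
+```
+```text
+┌──────────d─┬──────────────────dt─┐
+│ 2019-09-25 │ 2019-09-25 12:30:00 │
+└────────────┴─────────────────────┘
+```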
@@ -135,7 +135,7 @@ SELECT
toString(now(), 'Asia/Yekaterinburg') AS now_yekat
```
-```
+```text
┌───────────now_local─┬─now_yekat───────────┐
│ 2016-06-15 00:11:21 │ 2016-06-15 02:11:21 │
└─────────────────────┴─────────────────────┘
@@ -158,7 +158,7 @@ SELECT
SELECT toFixedString('foo', 8) AS s, toStringCutToZero(s) AS s_cut
```
-```
+```text
┌─s─────────────┬─s_cut─┐
│ foo\0\0\0\0\0 │ foo │
└───────────────┴───────┘
@@ -168,7 +168,7 @@ SELECT toFixedString('foo', 8) AS s, toStringCutToZero(s) AS s_cut
SELECT toFixedString('foo\0bar', 8) AS s, toStringCutToZero(s) AS s_cut
```
-```
+```text
┌─s──────────┬─s_cut─┐
│ foo\0bar\0 │ foo │
└────────────┴───────┘
@@ -202,7 +202,7 @@ SELECT
CAST(timestamp, 'FixedString(22)') AS fixed_string
```
-```
+```text
┌─timestamp───────────┬────────────datetime─┬───────date─┬─string──────────────┬─fixed_string──────────────┐
│ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00 │ 2016-06-15 │ 2016-06-15 23:00:00 │ 2016-06-15 23:00:00\0\0\0 │
└─────────────────────┴─────────────────────┴────────────┴─────────────────────┴───────────────────────────┘
@@ -212,16 +212,19 @@ SELECT
Поддержано преобразование к типу [Nullable](../../data_types/nullable.md) и обратно. Пример:
-```
+```sql
SELECT toTypeName(x) FROM t_null
-
+```
+```text
┌─toTypeName(x)─┐
│ Int8 │
│ Int8 │
└───────────────┘
-
+```
+```sql
SELECT toTypeName(CAST(x, 'Nullable(UInt16)')) FROM t_null
-
+```
+```text
┌─toTypeName(CAST(x, 'Nullable(UInt16)'))─┐
│ Nullable(UInt16) │
│ Nullable(UInt16) │
diff --git a/docs/ru/query_language/functions/url_functions.md b/docs/ru/query_language/functions/url_functions.md
index 7002273d5cb..a6e97b8a013 100644
--- a/docs/ru/query_language/functions/url_functions.md
+++ b/docs/ru/query_language/functions/url_functions.md
@@ -13,7 +13,7 @@
Извлекает имя хоста из URL.
-```
+```sql
domain(url)
```
@@ -23,7 +23,7 @@ domain(url)
URL может быть указан со схемой или без неё. Примеры:
-```
+```text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
@@ -31,7 +31,7 @@ https://yandex.com/time/
Для указанных примеров функция `domain` возвращает следующие результаты:
-```
+```text
some.svn-hosting.com
some.svn-hosting.com
yandex.com
@@ -64,7 +64,7 @@ SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk')
Извлекает домен верхнего уровня из URL.
-```
+```sql
topLevelDomain(url)
```
@@ -74,7 +74,7 @@ topLevelDomain(url)
URL может быть указан со схемой или без неё. Примеры:
-```
+```text
svn+ssh://some.svn-hosting.com:80/repo/trunk
some.svn-hosting.com:80/repo/trunk
https://yandex.com/time/
@@ -138,7 +138,7 @@ SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')
То же самое, но без протокола и хоста в результате. Элемент / (корень) не включается. Пример:
Функция используется для реализации древовидных отчётов по URL в Яндекс.Метрике.
-```
+```text
URLPathHierarchy('https://example.com/browse/CONV-6788') =
[
'/browse/',
@@ -150,11 +150,11 @@ URLPathHierarchy('https://example.com/browse/CONV-6788') =
Возвращает декодированный URL.
Пример:
-``` sql
+```sql
SELECT decodeURLComponent('http://127.0.0.1:8123/?query=SELECT%201%3B') AS DecodedURL;
```
-```
+```text
┌─DecodedURL─────────────────────────────┐
│ http://127.0.0.1:8123/?query=SELECT 1; │
└────────────────────────────────────────┘
diff --git a/docs/ru/query_language/functions/uuid_functions.md b/docs/ru/query_language/functions/uuid_functions.md
index 93020824091..d933130d30c 100644
--- a/docs/ru/query_language/functions/uuid_functions.md
+++ b/docs/ru/query_language/functions/uuid_functions.md
@@ -16,15 +16,15 @@ generateUUIDv4()
Этот пример демонстрирует, как создать таблицу с UUID-колонкой и добавить в нее сгенерированный UUID.
-``` sql
-:) CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog
+```sql
+CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog
-:) INSERT INTO t_uuid SELECT generateUUIDv4()
+INSERT INTO t_uuid SELECT generateUUIDv4()
-:) SELECT * FROM t_uuid
+SELECT * FROM t_uuid
```
-```
+```text
┌────────────────────────────────────x─┐
│ f4bf890f-f9dc-4332-ad5c-0c18e73f28e9 │
└──────────────────────────────────────┘
@@ -44,11 +44,11 @@ toUUID(String)
**Пример использования**
-``` sql
-:) SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
+```sql
+SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0') AS uuid
```
-```
+```text
┌─────────────────────────────────uuid─┐
│ 61f0c404-5cb3-11e7-907b-a6006ad3dba0 │
└──────────────────────────────────────┘
@@ -58,7 +58,7 @@ toUUID(String)
Принимает строку, содержащую 36 символов в формате `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`, и возвращает в виде набора байт в [FixedString(16)](../../data_types/fixedstring.md).
-``` sql
+```sql
UUIDStringToNum(String)
```
@@ -68,13 +68,13 @@ FixedString(16)
**Пример использования**
-``` sql
-:) SELECT
+```sql
+SELECT
'612f3c40-5d3b-217e-707b-6a546a3d7b29' AS uuid,
UUIDStringToNum(uuid) AS bytes
```
-```
+```text
┌─uuid─────────────────────────────────┬─bytes────────────┐
│ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │ a/<@];!~p{jTj={) │
└──────────────────────────────────────┴──────────────────┘
@@ -84,7 +84,7 @@ FixedString(16)
Принимает значение типа [FixedString(16)](../../data_types/fixedstring.md). Возвращает строку из 36 символов в текстовом виде.
-``` sql
+```sql
UUIDNumToString(FixedString(16))
```
@@ -94,13 +94,13 @@ UUIDNumToString(FixedString(16))
**Пример использования**
-``` sql
+```sql
SELECT
'a/<@];!~p{jTj={)' AS bytes,
UUIDNumToString(toFixedString(bytes, 16)) AS uuid
```
-```
+```text
┌─bytes────────────┬─uuid─────────────────────────────────┐
│ a/<@];!~p{jTj={) │ 612f3c40-5d3b-217e-707b-6a546a3d7b29 │
└──────────────────┴──────────────────────────────────────┘
diff --git a/docs/ru/query_language/functions/ym_dict_functions.md b/docs/ru/query_language/functions/ym_dict_functions.md
index c086d7404c8..2bf511438b1 100644
--- a/docs/ru/query_language/functions/ym_dict_functions.md
+++ b/docs/ru/query_language/functions/ym_dict_functions.md
@@ -21,7 +21,7 @@ ClickHouse поддерживает работу одновременно с н
Во все функции по работе с регионами, в конце добавлен один необязательный аргумент - ключ словаря. Далее он обозначен как geobase.
Пример:
-```
+```text
regionToCountry(RegionID) - использует словарь по умолчанию: /opt/geo/regions_hierarchy.txt;
regionToCountry(RegionID, '') - использует словарь по умолчанию: /opt/geo/regions_hierarchy.txt;
regionToCountry(RegionID, 'ua') - использует словарь для ключа ua: /opt/geo/regions_hierarchy_ua.txt;
@@ -33,13 +33,13 @@ regionToCountry(RegionID, 'ua') - использует словарь для к
### regionToArea(id\[, geobase\])
Переводит регион в область (тип в геобазе - 5). В остальном, аналогично функции regionToCity.
-``` sql
+```sql
SELECT DISTINCT regionToName(regionToArea(toUInt32(number), 'ua'))
FROM system.numbers
LIMIT 15
```
-```
+```text
┌─regionToName(regionToArea(toUInt32(number), \'ua\'))─┐
│ │
│ Москва и Московская область │
@@ -62,13 +62,13 @@ LIMIT 15
### regionToDistrict(id\[, geobase\])
Переводит регион в федеральный округ (тип в геобазе - 4). В остальном, аналогично функции regionToCity.
-``` sql
+```sql
SELECT DISTINCT regionToName(regionToDistrict(toUInt32(number), 'ua'))
FROM system.numbers
LIMIT 15
```
-```
+```text
┌─regionToName(regionToDistrict(toUInt32(number), \'ua\'))─┐
│ │
│ Центральный федеральный округ │
diff --git a/docs/ru/query_language/insert_into.md b/docs/ru/query_language/insert_into.md
index 1f92e6525b8..88c548d394c 100644
--- a/docs/ru/query_language/insert_into.md
+++ b/docs/ru/query_language/insert_into.md
@@ -5,7 +5,7 @@
Базовый формат запроса:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] VALUES (v11, v12, v13), (v21, v22, v23), ...
```
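+Минимальный пример (набросок; имя таблицы и столбцы условные):
+
+```sql
+INSERT INTO test.visits (id, duration) VALUES (1, 5.5), (2, 12.25)
+```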
@@ -18,13 +18,13 @@ INSERT INTO [db.]table [(c1, c2, c3)] VALUES (v11, v12, v13), (v21, v22, v23), .
В INSERT можно передавать данные любого [формата](../interfaces/formats.md#formats), который поддерживает ClickHouse. Для этого формат необходимо указать в запросе в явном виде:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT format_name data_set
```
Например, следующий формат запроса идентичен базовому варианту INSERT ... VALUES:
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] FORMAT Values (v11, v12, v13), (v21, v22, v23), ...
```
@@ -32,7 +32,7 @@ ClickHouse отсекает все пробелы и один перенос с
Пример:
-``` sql
+```sql
INSERT INTO t FORMAT TabSeparated
11 Hello, world!
22 Qwerty
@@ -46,7 +46,7 @@ INSERT INTO t FORMAT TabSeparated
### Вставка результатов `SELECT` {#insert_query_insert-select}
-``` sql
+```sql
INSERT INTO [db.]table [(c1, c2, c3)] SELECT ...
```
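+Например, так можно переложить агрегированный результат в другую таблицу (набросок; имена таблиц и столбцов условные):
+
+```sql
+INSERT INTO daily_visits (day, visits)
+SELECT toDate(EventTime) AS day, count() AS visits
+FROM visits_all
+GROUP BY day
+```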
diff --git a/docs/ru/query_language/misc.md b/docs/ru/query_language/misc.md
index ab19e559649..00cb0e7fd93 100644
--- a/docs/ru/query_language/misc.md
+++ b/docs/ru/query_language/misc.md
@@ -176,12 +176,16 @@ KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = '
OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition] [FINAL]
```
-Просит движок таблицы сделать что-нибудь, что может привести к более оптимальной работе.
-Поддерживается только движками `*MergeTree`, в котором выполнение этого запроса инициирует внеочередное слияние кусков данных.
-Если указан `PARTITION`, то оптимизация будет производиться только для указаной партиции.
-Если указан `FINAL`, то оптимизация будет производиться даже когда все данные уже лежат в одном куске.
+Запрос пытается запустить внеплановый мёрж кусков данных для таблиц семейства [MergeTree](../operations/table_engines/mergetree.md). Другие движки таблиц не поддерживаются.
-!!! warning "Внимание"Запрос OPTIMIZE не может устранить причину появления ошибки "Too many parts".
+Если `OPTIMIZE` применяется к таблицам семейства [ReplicatedMergeTree](../operations/table_engines/replication.md), ClickHouse создаёт задачу на мёрж и ожидает её исполнения на всех узлах (если активирована настройка `replication_alter_partitions_sync`).
+
+- Если `OPTIMIZE` не выполняет мёрж по какой-либо причине, ClickHouse не оповещает об этом клиента. Чтобы включить оповещения, используйте настройку [optimize_throw_if_noop](../operations/settings/settings.md#setting-optimize_throw_if_noop).
+- Если указать `PARTITION`, то оптимизация выполняется только для указанной партиции.
+- Если указать `FINAL`, то оптимизация выполняется даже в том случае, если все данные уже лежат в одном куске.
+
+!!! warning "Внимание"
+ Запрос `OPTIMIZE` не может устранить причину появления ошибки "Too many parts".
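+
+Примерный сценарий использования (набросок; имя таблицы и значение партиции условные). Настройка `optimize_throw_if_noop` включена, чтобы получить исключение, если мёрж не будет запущен:
+
+```sql
+SET optimize_throw_if_noop = 1;
+OPTIMIZE TABLE db.visits PARTITION 201909 FINAL;
+```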
## RENAME {#misc_operations-rename}
@@ -257,7 +261,7 @@ SHOW PROCESSLIST [INTO OUTFILE filename] [FORMAT format]
Полезный совет (выполните в консоли):
```bash
-watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
+$ watch -n1 "clickhouse-client --query='SHOW PROCESSLIST'"
```
## SHOW TABLES
diff --git a/docs/ru/query_language/operators.md b/docs/ru/query_language/operators.md
index de38bb5b193..74aa2270e90 100644
--- a/docs/ru/query_language/operators.md
+++ b/docs/ru/query_language/operators.md
@@ -67,7 +67,7 @@
## Оператор для работы с датами и временем {#operators-datetime}
-``` sql
+```sql
EXTRACT(part FROM date);
```
@@ -88,7 +88,7 @@ EXTRACT(part FROM date);
Примеры:
-``` sql
+```sql
SELECT EXTRACT(DAY FROM toDate('2017-06-15'));
SELECT EXTRACT(MONTH FROM toDate('2017-06-15'));
SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
@@ -96,7 +96,7 @@ SELECT EXTRACT(YEAR FROM toDate('2017-06-15'));
В следующем примере создадим таблицу и добавим в неё значение с типом `DateTime`.
-``` sql
+```sql
CREATE TABLE test.Orders
(
OrderId UInt64,
@@ -106,10 +106,10 @@ CREATE TABLE test.Orders
ENGINE = Log;
```
-``` sql
+```sql
INSERT INTO test.Orders VALUES (1, 'Jarlsberg Cheese', toDateTime('2008-10-11 13:23:44'));
```
-``` sql
+```sql
SELECT
toYear(OrderDate) AS OrderYear,
toMonth(OrderDate) AS OrderMonth,
@@ -118,13 +118,15 @@ SELECT
toMinute(OrderDate) AS OrderMinute,
toSecond(OrderDate) AS OrderSecond
FROM test.Orders;
+```
+```text
┌─OrderYear─┬─OrderMonth─┬─OrderDay─┬─OrderHour─┬─OrderMinute─┬─OrderSecond─┐
│ 2008 │ 10 │ 11 │ 13 │ 23 │ 44 │
└───────────┴────────────┴──────────┴───────────┴─────────────┴─────────────┘
```
-Больше примеров приведено в [тестах](https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
+Больше примеров приведено в [тестах](https://github.com/ClickHouse/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00619_extract.sql).
## Оператор логического отрицания
@@ -148,7 +150,7 @@ FROM test.Orders;
## Условное выражение {#operator_case}
-``` sql
+```sql
CASE [x]
WHEN a THEN b
[WHEN ... THEN ...]
@@ -198,18 +200,13 @@ ClickHouse поддерживает операторы `IS NULL` и `IS NOT NULL
- `0` в обратном случае.
- Для прочих значений оператор `IS NULL` всегда возвращает `0`.
-```bash
-:) SELECT x+100 FROM t_null WHERE y IS NULL
-
-SELECT x + 100
-FROM t_null
-WHERE isNull(y)
-
+```sql
+SELECT x+100 FROM t_null WHERE y IS NULL
+```
+```text
┌─plus(x, 100)─┐
│ 101 │
└──────────────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
@@ -220,18 +217,13 @@ WHERE isNull(y)
- `1`, в обратном случае.
- Для прочих значений оператор `IS NOT NULL` всегда возвращает `1`.
-```bash
-:) SELECT * FROM t_null WHERE y IS NOT NULL
-
-SELECT *
-FROM t_null
-WHERE isNotNull(y)
-
+```sql
+SELECT * FROM t_null WHERE y IS NOT NULL
+```
+```text
┌─x─┬─y─┐
│ 2 │ 3 │
└───┴───┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
[Оригинальная статья](https://clickhouse.yandex/docs/ru/query_language/operators/)
diff --git a/docs/ru/query_language/select.md b/docs/ru/query_language/select.md
index 955873025f6..d206ba42c0b 100644
--- a/docs/ru/query_language/select.md
+++ b/docs/ru/query_language/select.md
@@ -35,7 +35,7 @@ SELECT [DISTINCT] expr_list
В дальнейшем, результаты выражений можно использовать в секции SELECT.
Пример 1: Использование константного выражения как "переменной"
-```
+```sql
WITH '2019-08-01 15:23:00' as ts_upper_bound
SELECT *
FROM hits
@@ -45,7 +45,7 @@ WHERE
```
Пример 2: Выкидывание выражения sum(bytes) из списка колонок в SELECT
-```
+```sql
WITH sum(bytes) as s
SELECT
formatReadableSize(s),
@@ -56,7 +56,7 @@ ORDER BY s
```
Пример 3: Использование результатов скалярного подзапроса
-```
+```sql
/* запрос покажет TOP 10 самых больших таблиц */
WITH
(
@@ -75,7 +75,7 @@ LIMIT 10
Пример 4: Переиспользование выражения
В настоящий момент, переиспользование выражения из секции WITH внутри подзапроса возможно только через дублирование.
-```
+```sql
WITH ['hello'] AS hello
SELECT
hello,
@@ -85,7 +85,8 @@ FROM
WITH ['hello'] AS hello
SELECT hello
)
-
+```
+```text
┌─hello─────┬─hello─────┐
│ ['hello'] │ ['hello'] │
└───────────┴───────────┘
@@ -94,20 +95,24 @@ FROM
### Секция FROM
Если секция FROM отсутствует, то данные будут читаться из таблицы `system.one`.
-Таблица system.one содержит ровно одну строку (то есть, эта таблица выполняет такую же роль, как таблица DUAL, которую можно найти в других СУБД).
+Таблица `system.one` содержит ровно одну строку (то есть, эта таблица выполняет такую же роль, как таблица DUAL, которую можно найти в других СУБД).
-В секции FROM указывается таблица, из которой будут читаться данные, либо подзапрос, либо табличная функция; дополнительно могут присутствовать ARRAY JOIN и обычный JOIN (смотрите ниже).
+Секция `FROM` определяет источник данных:
-Вместо таблицы, может быть указан подзапрос SELECT в скобках.
-В этом случае, конвейер обработки подзапроса будет встроен в конвейер обработки внешнего запроса.
-В отличие от стандартного SQL, после подзапроса не нужно указывать его синоним. Для совместимости, присутствует возможность написать AS name после подзапроса, но указанное имя нигде не используется.
+- Таблица
+- Подзапрос
+- [Табличная функция](table_functions/index.md)
-Вместо таблицы, может быть указана табличная функция. Подробнее смотрите раздел "Табличные функции".
+Также могут присутствовать `ARRAY JOIN` и обычный `JOIN` (смотрите ниже).
+
+Вместо таблицы, может быть указан подзапрос `SELECT` в скобках.
+В отличие от стандартного SQL, после подзапроса не обязательно указывать его синоним.
Для выполнения запроса, из соответствующей таблицы, вынимаются все столбцы, перечисленные в запросе. Из подзапросов выкидываются столбцы, не нужные для внешнего запроса.
-Если в запросе не перечислено ни одного столбца (например, SELECT count() FROM t), то из таблицы всё равно вынимается один какой-нибудь столбец (предпочитается самый маленький), для того, чтобы можно было хотя бы посчитать количество строк.
+Если в запросе не перечислено ни одного столбца (например, `SELECT count() FROM t`), то из таблицы всё равно вынимается один какой-нибудь столбец (предпочитается самый маленький), для того, чтобы можно было посчитать количество строк.
-Модификатор FINAL может быть использован при SELECT-е из таблиц типа ReplacingMergeTree, SummingMergeTree, AggregatingMergeTree, CollapsingMergeTree, VersionedCollapsingMergeTree. При указании FINAL, данные будут выбираться полностью "домерженными". Стоит учитывать, что использование FINAL приводит к выбору кроме указанных в SELECT-е столбцов также столбцов, относящихся к первичному ключу. Также, запрос будет выполняться в один поток, и при выполнении запроса будет выполняться слияние данных. Это приводит к тому, что при использовании FINAL, запрос выполняется медленнее. В большинстве случаев, следует избегать использования FINAL.
+Модификатор `FINAL` может быть использован в запросе `SELECT` из таблиц семейства [MergeTree](../operations/table_engines/mergetree.md). При указании `FINAL` данные будут выбираться полностью "домерженными". Стоит учитывать, что при использовании `FINAL` читаются также столбцы, относящиеся к первичному ключу. Кроме того, запрос выполняется в один поток, а слияние данных происходит во время его выполнения, поэтому с `FINAL` запрос работает медленнее. В большинстве случаев следует избегать использования `FINAL`.
+Модификатор поддерживается всеми движками семейства `MergeTree`, которые выполняют преобразования данных в процессе фоновых слияний (кроме `GraphiteMergeTree`).
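+
+Например, для таблицы на движке `ReplacingMergeTree` запрос с `FINAL` мог бы выглядеть так (набросок; имя таблицы условное):
+
+```sql
+SELECT key, value
+FROM replacing_table FINAL
+WHERE key = 42
+```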
### Секция SAMPLE {#select-sample-clause}
@@ -227,7 +232,7 @@ SAMPLE 1/10 OFFSET 1/2
Позволяет выполнить `JOIN` с массивом или вложенной структурой данных. Смысл похож на функцию [arrayJoin](functions/array_join.md#functions_arrayjoin), но функциональность более широкая.
-``` sql
+```sql
SELECT
FROM
[LEFT] ARRAY JOIN
@@ -246,7 +251,7 @@ FROM
Рассмотрим примеры использования `ARRAY JOIN` и `LEFT ARRAY JOIN`. Для начала создадим таблицу, содержащую столбец с типом [Array](../data_types/array.md), и добавим в него значение:
-``` sql
+```sql
CREATE TABLE arrays_test
(
s String,
@@ -256,7 +261,7 @@ CREATE TABLE arrays_test
INSERT INTO arrays_test
VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
```
-```
+```text
┌─s───────────┬─arr─────┐
│ Hello │ [1,2] │
│ World │ [3,4,5] │
@@ -271,7 +276,7 @@ SELECT s, arr
FROM arrays_test
ARRAY JOIN arr;
```
-```
+```text
┌─s─────┬─arr─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -288,7 +293,7 @@ SELECT s, arr
FROM arrays_test
LEFT ARRAY JOIN arr;
```
-```
+```text
┌─s───────────┬─arr─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -303,13 +308,13 @@ LEFT ARRAY JOIN arr;
Для массива в секции `ARRAY JOIN` может быть указан алиас. В этом случае, элемент массива будет доступен под этим алиасом, а сам массив — под исходным именем. Пример:
-``` sql
+```sql
SELECT s, arr, a
FROM arrays_test
ARRAY JOIN arr AS a;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┐
│ Hello │ [1,2] │ 1 │
│ Hello │ [1,2] │ 2 │
@@ -321,13 +326,13 @@ ARRAY JOIN arr AS a;
Используя алиасы, можно выполнять `JOIN` с внешними массивами:
-``` sql
+```sql
SELECT s, arr_external
FROM arrays_test
ARRAY JOIN [1, 2, 3] AS arr_external;
```
-```
+```text
┌─s───────────┬─arr_external─┐
│ Hello │ 1 │
│ Hello │ 2 │
@@ -343,13 +348,13 @@ ARRAY JOIN [1, 2, 3] AS arr_external;
В секции `ARRAY JOIN` можно указать через запятую сразу несколько массивов. В этом случае, `JOIN` делается с ними одновременно (прямая сумма, а не прямое произведение). Обратите внимание, массивы должны быть одинаковых размеров. Примеры:
-``` sql
+```sql
SELECT s, arr, a, num, mapped
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS mapped;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┬─num─┬─mapped─┐
│ Hello │ [1,2] │ 1 │ 1 │ 2 │
│ Hello │ [1,2] │ 2 │ 2 │ 3 │
@@ -361,13 +366,13 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS ma
В примере ниже используется функция [arrayEnumerate](functions/array_functions.md#array_functions-arrayenumerate):
-``` sql
+```sql
SELECT s, arr, a, num, arrayEnumerate(arr)
FROM arrays_test
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
```
-```
+```text
┌─s─────┬─arr─────┬─a─┬─num─┬─arrayEnumerate(arr)─┐
│ Hello │ [1,2] │ 1 │ 1 │ [1,2] │
│ Hello │ [1,2] │ 2 │ 2 │ [1,2] │
@@ -381,7 +386,7 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
`ARRAY JOIN` также работает с [вложенными структурами данных](../data_types/nested_data_structures/nested.md). Пример:
-``` sql
+```sql
CREATE TABLE nested_test
(
s String,
@@ -394,7 +399,7 @@ INSERT INTO nested_test
VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', [], []);
```
-```
+```text
┌─s───────┬─nest.x──┬─nest.y─────┐
│ Hello │ [1,2] │ [10,20] │
│ World │ [3,4,5] │ [30,40,50] │
@@ -402,13 +407,13 @@ VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', []
└─────────┴─────────┴────────────┘
```
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │ 1 │ 10 │
│ Hello │ 2 │ 20 │
@@ -420,13 +425,13 @@ ARRAY JOIN nest;
При указании имени вложенной структуры данных в `ARRAY JOIN`, смысл такой же, как `ARRAY JOIN` со всеми элементами-массивами, из которых она состоит. Пример:
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`, `nest.y`;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │ 1 │ 10 │
│ Hello │ 2 │ 20 │
@@ -438,13 +443,13 @@ ARRAY JOIN `nest.x`, `nest.y`;
Такой вариант тоже имеет смысл:
-``` sql
+```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`;
```
-```
+```text
┌─s─────┬─nest.x─┬─nest.y─────┐
│ Hello │ 1 │ [10,20] │
│ Hello │ 2 │ [10,20] │
@@ -456,13 +461,13 @@ ARRAY JOIN `nest.x`;
Алиас для вложенной структуры данных можно использовать, чтобы выбрать как результат `JOIN`-а, так и исходный массив. Пример:
-``` sql
+```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest AS n;
```
-```
+```text
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┐
│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │
│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │
@@ -474,13 +479,13 @@ ARRAY JOIN nest AS n;
Пример использования функции [arrayEnumerate](functions/array_functions.md#array_functions-arrayenumerate):
-``` sql
+```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`, num
FROM nested_test
ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num;
```
-```
+```text
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┬─num─┐
│ Hello │ 1 │ 10 │ [1,2] │ [10,20] │ 1 │
│ Hello │ 2 │ 20 │ [1,2] │ [10,20] │ 2 │
@@ -524,13 +529,13 @@ FROM
Для создания запросов мы рекомендуем использовать синтаксис `JOIN ON` или `JOIN USING`. Например:
-```
+```sql
SELECT * FROM t1 JOIN t2 ON t1.a = t2.a JOIN t3 ON t1.a = t3.a
```
В секции `FROM` вы можете использовать разделенные запятыми списки таблиц для объединения. Этот синтаксис работает только при включённой настройке [allow_experimental_cross_to_join_conversion = 1](../operations/settings/settings.md#settings-allow_experimental_cross_to_join_conversion). Например:
-```
+```sql
SELECT * FROM t1, t2, t3 WHERE t1.a = t2.a AND t1.a = t3.a
```
@@ -550,18 +555,34 @@ ClickHouse не поддерживает синтаксис с запятыми
Таблицы для `ASOF JOIN` должны иметь столбец с отсортированной последовательностью. Этот столбец не может быть единственным в таблице и должен быть одного из типов: `UInt32`, `UInt64`, `Float32`, `Float64`, `Date` и `DateTime`.
-Синтаксис `ASOF JOIN`:
+Можно использовать следующие типы синтаксиса:
-```
-SELECT expression_list FROM table_1 ASOF JOIN table_2 USING(equi_column1, ... equi_columnN, asof_column)
-```
+- `ASOF JOIN ... ON`
-`ASOF JOIN` использует `equi_columnX` для объединения по равенству и `asof_column` для объединения по ближайшему совпадению.
+ ```sql
+ SELECT expressions_list
+ FROM table_1
+ ASOF LEFT JOIN table_2
+ ON equi_cond AND closest_match_cond
+ ```
+
+ Можно использовать произвольное количество условий равенства и одно условие на ближайшее совпадение. Например, `SELECT count() FROM A ASOF LEFT JOIN B ON A.a == B.b AND B.t <= A.t`. Можно использовать только условия `table_2.some_col <= table_1.some_col` и `table_1.some_col >= table_2.some_col`. Условия типа `>` или `!=` не поддерживаются.
+
+- `ASOF JOIN ... USING`
+
+ ```sql
+ SELECT expressions_list
+ FROM table_1
+ ASOF JOIN table_2
+ USING (equi_column1, ... equi_columnN, asof_column)
+ ```
+
+ Для слияния по равенству `ASOF JOIN` использует `equi_columnX`, а для слияния по ближайшему совпадению использует `asof_column` с условием `table_1.asof_column >= table_2.asof_column`. Столбец `asof_column` должен быть последним в секции `USING`.
Например, рассмотрим следующие таблицы:
```
- table_1 table_2
+ table_1 table_2
event | ev_time | user_id event | ev_time | user_id
----------|---------|---------- ----------|---------|----------
... ...
@@ -573,10 +594,8 @@ event_1_2 | 13:00 | 42 event_2_3 | 13:00 | 42
`ASOF JOIN` принимает метку времени пользовательского события из `table_1` и находит такое событие в `table_2`, метка времени которого наиболее близка (равна или меньше) к метке времени события из `table_1`. При этом столбец `user_id` используется для объединения по равенству, а столбец `ev_time` для объединения по ближайшему совпадению. В нашем примере `event_1_1` может быть объединено с `event_2_1`, `event_1_2` может быть объединено с `event_2_3`, а `event_2_2` не объединяется.
-Детали реализации:
-
-- `asof_column` должен быть последним в секции `USING`.
-- `ASOF JOIN` не поддержан для движка таблиц [Join](../operations/table_engines/join.md).
+!!! note "Примечание"
+ `ASOF JOIN` не поддержан для движка таблиц [Join](../operations/table_engines/join.md).
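+
+Например, для таблиц `table_1` и `table_2`, описанных выше, запрос с синтаксисом `ON` мог бы выглядеть так (набросок):
+
+```sql
+SELECT table_1.event, table_2.event AS nearest_event
+FROM table_1
+ASOF LEFT JOIN table_2
+ON table_1.user_id = table_2.user_id AND table_2.ev_time <= table_1.ev_time
+```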
Чтобы задать значение строгости по умолчанию, используйте сессионный параметр [join_default_strictness](../operations/settings/settings.md#settings-join_default_strictness).
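+Например, чтобы все `JOIN` без явно указанной строгости выполнялись как `ALL` (набросок):
+
+```sql
+SET join_default_strictness = 'ALL'
+```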
@@ -620,7 +639,7 @@ ORDER BY hits DESC
LIMIT 10
```
-```
+```text
┌─CounterID─┬───hits─┬─visits─┐
│ 1143050 │ 523264 │ 13665 │
│ 731962 │ 475698 │ 102716 │
@@ -687,18 +706,14 @@ ClickHouse использует в выражении индексы, если
Пример проверки на `NULL`:
-```bash
-:) SELECT * FROM t_null WHERE y IS NULL
-
-SELECT *
-FROM t_null
-WHERE isNull(y)
+```sql
+SELECT * FROM t_null WHERE y IS NULL
+```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
-
-1 rows in set. Elapsed: 0.002 sec.
```
@@ -728,7 +743,7 @@ WHERE isNull(y)
Пример:
-``` sql
+```sql
SELECT
count(),
median(FetchTiming > 60 ? 60 : FetchTiming),
@@ -742,7 +757,7 @@ FROM hits
Пример:
-``` sql
+```sql
SELECT
domainWithoutWWW(URL) AS domain,
count(),
@@ -765,7 +780,7 @@ GROUP BY вычисляет для каждого встретившегося
Пусть есть таблица:
-```
+```text
┌─x─┬────y─┐
│ 1 │ 2 │
│ 2 │ ᴺᵁᴸᴸ │
@@ -777,7 +792,7 @@ GROUP BY вычисляет для каждого встретившегося
В результате запроса `SELECT sum(x), y FROM t_null_big GROUP BY y` мы получим:
-```
+```text
┌─sum(x)─┬────y─┐
│ 4 │ 2 │
│ 3 │ 3 │
@@ -929,7 +944,7 @@ WHERE и HAVING отличаются тем, что WHERE выполняется
Для таблицы
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 2 │
@@ -946,7 +961,7 @@ WHERE и HAVING отличаются тем, что WHERE выполняется
Выполним запрос `SELECT * FROM t_null_nan ORDER BY y NULLS FIRST`, получим:
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 7 │ ᴺᵁᴸᴸ │
@@ -1040,7 +1055,7 @@ ClickHouse поддерживает использование в одном з
Произвольное количество запросов может быть объединено с помощью `UNION ALL`. Пример:
-``` sql
+```sql
SELECT CounterID, 1 AS table, toInt64(count()) AS c
FROM test.hits
GROUP BY CounterID
@@ -1087,7 +1102,7 @@ SELECT CounterID, 2 AS table, sum(Sign) AS c
Примеры:
-``` sql
+```sql
SELECT UserID IN (123, 456) FROM ...
SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
```
@@ -1106,7 +1121,7 @@ SELECT (CounterID, UserID) IN ((34, 123), (101500, 456)) FROM ...
В подзапросе может быть указано более одного столбца для фильтрации кортежей.
Пример:
-``` sql
+```sql
SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
```
@@ -1115,7 +1130,7 @@ SELECT (CounterID, UserID) IN (SELECT CounterID, UserID FROM ...) FROM ...
Оператор IN и подзапрос могут встречаться в любой части запроса, в том числе в агрегатных и лямбда функциях.
Пример:
-``` sql
+```sql
SELECT
EventDate,
avg(UserID IN
@@ -1129,7 +1144,7 @@ GROUP BY EventDate
ORDER BY EventDate ASC
```
-```
+```text
┌──EventDate─┬────ratio─┐
│ 2014-03-17 │ 1 │
│ 2014-03-18 │ 0.807696 │
@@ -1151,7 +1166,7 @@ ORDER BY EventDate ASC
Рассмотрим для примера таблицу `t_null`:
-```
+```text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
│ 2 │ 3 │
@@ -1160,7 +1175,7 @@ ORDER BY EventDate ASC
При выполнении запроса `SELECT x FROM t_null WHERE y IN (NULL,3)` получим следующий результат:
-```
+```text
┌─x─┐
│ 2 │
└───┘
@@ -1168,10 +1183,11 @@ ORDER BY EventDate ASC
Видно, что строка, в которой `y = NULL`, выброшена из результатов запроса. Это произошло потому, что ClickHouse не может решить, входит ли `NULL` в множество `(NULL,3)`: в качестве результата операции возвращается `0`, а `SELECT` выбрасывает эту строку из финальной выдачи.
-```
+```sql
SELECT y IN (NULL, 3)
FROM t_null
-
+```
+```text
┌─in(y, tuple(NULL, 3))─┐
│ 0 │
│ 1 │
@@ -1200,13 +1216,13 @@ FROM t_null
Например, запрос
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table
```
будет отправлен на все удалённые серверы в виде
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table
```
@@ -1214,7 +1230,7 @@ SELECT uniq(UserID) FROM local_table
Теперь рассмотрим запрос с IN-ом:
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```
@@ -1222,7 +1238,7 @@ SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID I
Этот запрос будет отправлен на все удалённые серверы в виде
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM local_table WHERE CounterID = 34)
```
@@ -1232,19 +1248,19 @@ SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SEL
Чтобы исправить работу запроса, когда данные размазаны по серверам кластера произвольным образом, можно было бы указать **distributed_table** внутри подзапроса. Запрос будет выглядеть так:
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
Этот запрос будет отправлен на все удалённые серверы в виде
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
На каждом удалённом сервере начнёт выполняться подзапрос. Так как в подзапросе используется распределённая таблица, то подзапрос будет, на каждом удалённом сервере, снова отправлен на каждый удалённый сервер, в виде
-``` sql
+```sql
SELECT UserID FROM local_table WHERE CounterID = 34
```
@@ -1252,19 +1268,19 @@ SELECT UserID FROM local_table WHERE CounterID = 34
В таких случаях всегда следует использовать GLOBAL IN вместо IN. Рассмотрим его работу для запроса
-``` sql
+```sql
SELECT uniq(UserID) FROM distributed_table WHERE CounterID = 101500 AND UserID GLOBAL IN (SELECT UserID FROM distributed_table WHERE CounterID = 34)
```
На сервере-инициаторе запроса будет выполнен подзапрос
-``` sql
+```sql
SELECT UserID FROM distributed_table WHERE CounterID = 34
```
, и результат будет сложен во временную таблицу в оперативке. Затем запрос будет отправлен на каждый удалённый сервер в виде
-``` sql
+```sql
SELECT uniq(UserID) FROM local_table WHERE CounterID = 101500 AND UserID GLOBAL IN _data1
```
diff --git a/docs/ru/query_language/syntax.md b/docs/ru/query_language/syntax.md
index f48c8d236e4..703394789b3 100644
--- a/docs/ru/query_language/syntax.md
+++ b/docs/ru/query_language/syntax.md
@@ -108,7 +108,7 @@ INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def')
Синоним — это пользовательское имя выражения в запросе.
-```
+```sql
expr AS alias
```
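+
+Минимальный набросок: синоним `d` определяется и сразу используется в соседнем выражении того же запроса:
+
+```sql
+SELECT toDate('2019-08-01') AS d, toDayOfWeek(d) AS weekday
+```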
@@ -136,7 +136,7 @@ expr AS alias
Будьте осторожны с синонимами, совпадающими с именами столбцов или таблиц. Рассмотрим следующий пример:
-```
+```sql
CREATE TABLE t
(
a Int,
@@ -145,12 +145,13 @@ CREATE TABLE t
ENGINE = TinyLog()
```
-```
+```sql
SELECT
argMax(a, b),
sum(b) AS b
FROM t
-
+```
+```text
Received exception from server (version 18.14.17):
Code: 184. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Aggregate function sum(b) is found inside another aggregate function in query.
```
diff --git a/docs/ru/query_language/system.md b/docs/ru/query_language/system.md
index 2abdc5d34de..998cf7fc682 100644
--- a/docs/ru/query_language/system.md
+++ b/docs/ru/query_language/system.md
@@ -62,7 +62,7 @@ ClickHouse может оперировать [распределёнными](..
Отключает фоновую отправку при вставке данных в распределённые таблицы.
-```
+```sql
SYSTEM STOP DISTRIBUTED SENDS [db.]
```
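+
+Гипотетический пример (имя распределённой таблицы условное):
+
+```sql
+SYSTEM STOP DISTRIBUTED SENDS db.distributed_table
+```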
@@ -70,7 +70,7 @@ SYSTEM STOP DISTRIBUTED SENDS [db.]
В синхронном режиме отправляет все данные на узлы кластера. Если какие-либо узлы недоступны, ClickHouse генерирует исключение и останавливает выполнение запроса. Такой запрос можно повторять до успешного завершения, что будет означать возвращение связанности с остальными узлами кластера.
-```
+```sql
SYSTEM FLUSH DISTRIBUTED [db.]