mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-25 17:12:03 +00:00

commit fd7fcb28d4: Merge branch 'master' into initial-explain

.github/workflows/anchore-analysis.yml (vendored, new file, 45 lines)
@@ -0,0 +1,45 @@
# This workflow checks out code, performs an Anchore container image
# vulnerability and compliance scan, and integrates the results with
# GitHub Advanced Security code scanning feature. For more information on
# the Anchore scan action usage and parameters, see
# https://github.com/anchore/scan-action. For more information on
# Anchore container image scanning in general, see
# https://docs.anchore.com.

name: Docker Container Scan (clickhouse-server)

on:
  pull_request:
    paths:
      - docker/server/Dockerfile
      - .github/workflows/anchore-analysis.yml
  schedule:
    - cron: '0 21 * * *'

jobs:
  Anchore-Build-Scan:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout the code
      uses: actions/checkout@v2
    - name: Build the Docker image
      run: |
        cd docker/server
        perl -pi -e 's|=\$version||g' Dockerfile
        docker build . --file Dockerfile --tag localbuild/testimage:latest
    - name: Run the local Anchore scan action itself with GitHub Advanced Security code scanning integration enabled
      uses: anchore/scan-action@master
      with:
        image-reference: "localbuild/testimage:latest"
        dockerfile-path: "docker/server/Dockerfile"
        acs-report-enable: true
        fail-build: true
    - name: Upload artifact
      uses: actions/upload-artifact@v1.0.0
      with:
        name: AnchoreReports
        path: ./anchore-reports/
    - name: Upload Anchore Scan Report
      uses: github/codeql-action/upload-sarif@v1
      with:
        sarif_file: results.sarif
.github/workflows/codeql-analysis.yml (vendored, new file, 33 lines)
@@ -0,0 +1,33 @@
name: "CodeQL Scanning"

on:
  schedule:
    - cron: '0 19 * * *'
jobs:
  CodeQL-Build:

    runs-on: self-hosted
    timeout-minutes: 1440

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2
      with:
        fetch-depth: 2
        submodules: 'recursive'

    - run: git checkout HEAD^2
      if: ${{ github.event_name == 'pull_request' }}

    - name: Initialize CodeQL
      uses: github/codeql-action/init@v1

      with:
        languages: cpp

    - run: sudo apt-get update && sudo apt-get install -y git cmake python ninja-build gcc-9 g++-9 && mkdir build
    - run: cd build && CC=gcc-9 CXX=g++-9 cmake ..
    - run: cd build && ninja

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v1
.gitmodules (vendored, 5 changed lines)
@@ -1,6 +1,6 @@
 [submodule "contrib/poco"]
 	path = contrib/poco
-	url = https://github.com/ClickHouse-Extras/poco
+	url = https://github.com/ClickHouse-Extras/poco.git
 	branch = clickhouse
 [submodule "contrib/zstd"]
 	path = contrib/zstd
@@ -157,6 +157,9 @@
 [submodule "contrib/openldap"]
 	path = contrib/openldap
 	url = https://github.com/openldap/openldap.git
+[submodule "contrib/AMQP-CPP"]
+	path = contrib/AMQP-CPP
+	url = https://github.com/CopernicaMarketingSoftware/AMQP-CPP.git
 [submodule "contrib/cassandra"]
 	path = contrib/cassandra
 	url = https://github.com/ClickHouse-Extras/cpp-driver.git
CHANGELOG.md (590 changed lines)
@@ -1,5 +1,444 @@

## ClickHouse release 20.5

### ClickHouse release v20.5.2.7-stable 2020-07-02

#### Backward Incompatible Change

* Return non-Nullable result from `COUNT(DISTINCT)` and the `uniq` family of aggregate functions. If all passed values are NULL, return zero instead. This improves SQL compatibility. [#11661](https://github.com/ClickHouse/ClickHouse/pull/11661) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a check for the case when a user-level setting is specified in the wrong place. User-level settings should be specified in `users.xml` inside the `<profile>` section for a specific user profile (or in `<default>` for default settings). If misplaced, the server won't start and will log an exception message. This fixes [#9051](https://github.com/ClickHouse/ClickHouse/issues/9051). If you want to skip the check, you can either move settings to the appropriate place or add `<skip_check_for_incorrect_settings>1</skip_check_for_incorrect_settings>` to config.xml. [#11449](https://github.com/ClickHouse/ClickHouse/pull/11449) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The setting `input_format_with_names_use_header` is enabled by default. It will affect parsing of input formats `-WithNames` and `-WithNamesAndTypes`. [#10937](https://github.com/ClickHouse/ClickHouse/pull/10937) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove `experimental_use_processors` setting. It is enabled by default. [#10924](https://github.com/ClickHouse/ClickHouse/pull/10924) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Update `zstd` to 1.4.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages `Data after merge is not byte-identical to data on another replicas.` with an explanation. These messages are OK and you should not worry. This change is backward compatible, but we list it here in the changelog in case you wonder about these messages. [#10663](https://github.com/ClickHouse/ClickHouse/pull/10663) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a check for meaningless codecs and a setting `allow_suspicious_codecs` to control this check. This closes [#4966](https://github.com/ClickHouse/ClickHouse/issues/4966). [#10645](https://github.com/ClickHouse/ClickHouse/pull/10645) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Several Kafka settings changed their defaults. See [#11388](https://github.com/ClickHouse/ClickHouse/pull/11388).

#### New Feature

* `TTL DELETE WHERE` and `TTL GROUP BY` for automatic data coarsening and rollup in tables. [#10537](https://github.com/ClickHouse/ClickHouse/pull/10537) ([expl0si0nn](https://github.com/expl0si0nn)).
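  A minimal sketch of what this can look like; the table and column names are invented for illustration:

  ```sql
  CREATE TABLE metrics_rollup
  (
      d Date,
      k UInt32,
      v UInt64
  )
  ENGINE = MergeTree
  ORDER BY (k, d)
  -- After one month, rows sharing the same k are collapsed into one row
  -- with the sum of v (the GROUP BY key must be a prefix of the sorting key).
  TTL d + INTERVAL 1 MONTH GROUP BY k SET v = sum(v);
  ```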
* Implementation of PostgreSQL wire protocol. [#10242](https://github.com/ClickHouse/ClickHouse/pull/10242) ([Movses](https://github.com/MovElb)).
* Added system tables for users, roles, grants, settings profiles, quotas, row policies; added commands SHOW USER, SHOW [CURRENT|ENABLED] ROLES, SHOW SETTINGS PROFILES. [#10387](https://github.com/ClickHouse/ClickHouse/pull/10387) ([Vitaly Baranov](https://github.com/vitlibar)).
* Support writes in ODBC Table function [#10554](https://github.com/ClickHouse/ClickHouse/pull/10554) ([ageraab](https://github.com/ageraab)). [#10901](https://github.com/ClickHouse/ClickHouse/pull/10901) ([tavplubix](https://github.com/tavplubix)).
* Add query performance metrics based on Linux `perf_events` (these metrics are calculated with hardware CPU counters and OS counters). It is optional and requires `CAP_SYS_ADMIN` to be set on the clickhouse binary. [#9545](https://github.com/ClickHouse/ClickHouse/pull/9545) ([Andrey Skobtsov](https://github.com/And42)). [#11226](https://github.com/ClickHouse/ClickHouse/pull/11226) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Now support `NULL` and `NOT NULL` modifiers for data types in `CREATE` query. [#11057](https://github.com/ClickHouse/ClickHouse/pull/11057) ([Павел Потемкин](https://github.com/Potya)).
* Add `ArrowStream` input and output format. [#11088](https://github.com/ClickHouse/ClickHouse/pull/11088) ([hcz](https://github.com/hczhcz)).
* Support Cassandra as external dictionary source. [#4978](https://github.com/ClickHouse/ClickHouse/pull/4978) ([favstovol](https://github.com/favstovol)).
* Added a new layout `direct` which loads all the data directly from the source for each query, without storing or caching data. [#10622](https://github.com/ClickHouse/ClickHouse/pull/10622) ([Artem Streltsov](https://github.com/kekekekule)).
* Added new `complex_key_direct` layout to dictionaries, that does not store anything locally during query execution. [#10850](https://github.com/ClickHouse/ClickHouse/pull/10850) ([Artem Streltsov](https://github.com/kekekekule)).
* Added support for MySQL style global variables syntax (stub). This is needed for compatibility of MySQL protocol. [#11832](https://github.com/ClickHouse/ClickHouse/pull/11832) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added syntax highlighting to `clickhouse-client` using `replxx`. [#11422](https://github.com/ClickHouse/ClickHouse/pull/11422) ([Tagir Kuskarov](https://github.com/kuskarov)).
* `minMap` and `maxMap` functions were added. [#11603](https://github.com/ClickHouse/ClickHouse/pull/11603) ([Ildus Kurbangaliev](https://github.com/ildus)).
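  A small sketch of the element-wise semantics, with invented values:

  ```sql
  SELECT minMap(k, v)
  FROM (
      SELECT [1, 2] AS k, [10, 20] AS v
      UNION ALL
      SELECT [2, 3] AS k, [5, 8] AS v
  );
  -- ([1, 2, 3], [10, 5, 8]): the minimum of v is kept per key of k
  ```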
* Add the `system.asynchronous_metric_log` table that logs historical metrics from `system.asynchronous_metrics`. [#11588](https://github.com/ClickHouse/ClickHouse/pull/11588) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add functions `extractAllGroupsHorizontal(haystack, re)` and `extractAllGroupsVertical(haystack, re)`. [#11554](https://github.com/ClickHouse/ClickHouse/pull/11554) ([Vasily Nemkov](https://github.com/Enmk)).
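  A sketch of the difference between the two, with an invented input string:

  ```sql
  SELECT extractAllGroupsHorizontal('abc=111, def=222', '(\\w+)=(\\d+)');
  -- [['abc','def'],['111','222']]: one array per capture group
  SELECT extractAllGroupsVertical('abc=111, def=222', '(\\w+)=(\\d+)');
  -- [['abc','111'],['def','222']]: one array per match
  ```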
* Add SHOW CLUSTER(S) queries. [#11467](https://github.com/ClickHouse/ClickHouse/pull/11467) ([hexiaoting](https://github.com/hexiaoting)).
* Add `netloc` function for extracting network location, similar to Python's `urlparse(url).netloc`. [#11356](https://github.com/ClickHouse/ClickHouse/pull/11356) ([Guillaume Tassery](https://github.com/YiuRULE)).
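  For example (URL invented):

  ```sql
  SELECT netloc('http://paul@www.example.com:80/');
  -- 'paul@www.example.com:80'
  ```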
* Add 2 more virtual columns for engine=Kafka to access message headers. [#11283](https://github.com/ClickHouse/ClickHouse/pull/11283) ([filimonov](https://github.com/filimonov)).
* Add `_timestamp_ms` virtual column for Kafka engine (type is `Nullable(DateTime64(3))`). [#11260](https://github.com/ClickHouse/ClickHouse/pull/11260) ([filimonov](https://github.com/filimonov)).
* Add function `randomFixedString`. [#10866](https://github.com/ClickHouse/ClickHouse/pull/10866) ([Andrei Nekrashevich](https://github.com/xolm)).
* Add function `fuzzBits` that randomly flips bits in a string with given probability. [#11237](https://github.com/ClickHouse/ClickHouse/pull/11237) ([Andrei Nekrashevich](https://github.com/xolm)).
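  A quick sketch of this function together with `randomFixedString` from the previous item:

  ```sql
  SELECT randomFixedString(16);                -- 16 random bytes
  SELECT fuzzBits(materialize('abc'), 0.1);    -- each bit is flipped with probability 0.1
  ```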
* Allow comparison of numbers with constant string in comparison operators, IN and VALUES sections. [#11647](https://github.com/ClickHouse/ClickHouse/pull/11647) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `round_robin` load_balancing mode. [#11645](https://github.com/ClickHouse/ClickHouse/pull/11645) ([Azat Khuzhin](https://github.com/azat)).
* Add `cast_keep_nullable` setting. If it is set, `CAST(something_nullable AS Type)` returns `Nullable(Type)`. [#11733](https://github.com/ClickHouse/ClickHouse/pull/11733) ([Artem Zuikov](https://github.com/4ertus2)).
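  For instance:

  ```sql
  SET cast_keep_nullable = 1;
  SELECT toTypeName(CAST(toNullable(1) AS Int32));
  -- Nullable(Int32) instead of Int32
  ```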
* Added column `position` to `system.columns` table and `column_position` to `system.parts_columns` table. It contains ordinal position of a column in a table starting with 1. This closes [#7744](https://github.com/ClickHouse/ClickHouse/issues/7744). [#11655](https://github.com/ClickHouse/ClickHouse/pull/11655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* ON CLUSTER support for SYSTEM {FLUSH DISTRIBUTED,STOP/START DISTRIBUTED SEND}. [#11415](https://github.com/ClickHouse/ClickHouse/pull/11415) ([Azat Khuzhin](https://github.com/azat)).
* Add system.distribution_queue table. [#11394](https://github.com/ClickHouse/ClickHouse/pull/11394) ([Azat Khuzhin](https://github.com/azat)).
* Support for all format settings in Kafka; expose some settings on the table level; adjust the defaults for better performance. [#11388](https://github.com/ClickHouse/ClickHouse/pull/11388) ([filimonov](https://github.com/filimonov)).
* Add `port` function (to extract port from URL). [#11120](https://github.com/ClickHouse/ClickHouse/pull/11120) ([Azat Khuzhin](https://github.com/azat)).
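  For example (URL invented):

  ```sql
  SELECT port('http://paul@www.example.com:80/');
  -- 80
  ```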
* Now `dictGet*` functions accept table names. [#11050](https://github.com/ClickHouse/ClickHouse/pull/11050) ([Vitaly Baranov](https://github.com/vitlibar)).
* The `clickhouse-format` tool is now able to format multiple queries when the `-n` argument is used. [#10852](https://github.com/ClickHouse/ClickHouse/pull/10852) ([Darío](https://github.com/dgrr)).
* Possibility to configure proxy-resolver for DiskS3. [#10744](https://github.com/ClickHouse/ClickHouse/pull/10744) ([Pavel Kovalenko](https://github.com/Jokser)).
* Make `pointInPolygon` work with a non-constant polygon. `pointInPolygon` can now take `Array(Array(Tuple(..., ...)))` as the second argument: an array of the polygon and its holes. [#10623](https://github.com/ClickHouse/ClickHouse/pull/10623) ([Alexey Ilyukhov](https://github.com/livace)) [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Added `move_ttl_info` to `system.parts` in order to provide introspection of move TTL functionality. [#10591](https://github.com/ClickHouse/ClickHouse/pull/10591) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Possibility to work with S3 through proxies. [#10576](https://github.com/ClickHouse/ClickHouse/pull/10576) ([Pavel Kovalenko](https://github.com/Jokser)).
* Add `NCHAR` and `NVARCHAR` synonyms for data types. [#11025](https://github.com/ClickHouse/ClickHouse/pull/11025) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Resolved [#7224](https://github.com/ClickHouse/ClickHouse/issues/7224): added `FailedQuery`, `FailedSelectQuery` and `FailedInsertQuery` metrics to `system.events` table. [#11151](https://github.com/ClickHouse/ClickHouse/pull/11151) ([Nikita Orlov](https://github.com/naorlov)).
* Add more `jemalloc` statistics to `system.asynchronous_metrics`, and ensure that we see up-to-date values for them. [#11748](https://github.com/ClickHouse/ClickHouse/pull/11748) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Allow to specify default S3 credentials and custom auth headers. [#11134](https://github.com/ClickHouse/ClickHouse/pull/11134) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Added new functions to import/export DateTime64 as Int64 with various precision: `to-/fromUnixTimestamp64Milli/-Micro/-Nano`. [#10923](https://github.com/ClickHouse/ClickHouse/pull/10923) ([Vasily Nemkov](https://github.com/Enmk)).
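  A round-trip sketch:

  ```sql
  SELECT toUnixTimestamp64Milli(now64(3)) AS ms,
         fromUnixTimestamp64Milli(toUnixTimestamp64Milli(now64(3))) AS roundtrip;
  ```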
* Allow specifying `mongodb://` URI for MongoDB dictionaries. [#10915](https://github.com/ClickHouse/ClickHouse/pull/10915) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* OFFSET keyword can now be used without an affiliated LIMIT clause. [#10802](https://github.com/ClickHouse/ClickHouse/pull/10802) ([Guillaume Tassery](https://github.com/YiuRULE)).
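  Assuming the standalone form added here, a usage sketch:

  ```sql
  SELECT number FROM numbers(10) ORDER BY number OFFSET 5;
  -- returns 5, 6, 7, 8, 9
  ```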
* Added `system.licenses` table. This table contains licenses of third-party libraries that are located in `contrib` directory. This closes [#2890](https://github.com/ClickHouse/ClickHouse/issues/2890). [#10795](https://github.com/ClickHouse/ClickHouse/pull/10795) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* New function `toStartOfSecond(DateTime64) -> DateTime64` that zeroes out the sub-second part of a DateTime64 value. [#10722](https://github.com/ClickHouse/ClickHouse/pull/10722) ([Vasily Nemkov](https://github.com/Enmk)).
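  For example:

  ```sql
  SELECT toStartOfSecond(toDateTime64('2020-07-02 12:34:56.789', 3));
  -- 2020-07-02 12:34:56.000
  ```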
* Add new input format `JSONAsString` that accepts a sequence of JSON objects separated by newlines, spaces and/or commas. [#10607](https://github.com/ClickHouse/ClickHouse/pull/10607) ([Kruglov Pavel](https://github.com/Avogar)).
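  A sketch of how it can be used; the table name is invented:

  ```sql
  CREATE TABLE json_as_string (json String) ENGINE = Memory;
  INSERT INTO json_as_string FORMAT JSONAsString {"foo": {"bar": ["baz", 1]}}
  SELECT * FROM json_as_string;
  -- each whole JSON object lands in a single String column
  ```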
* Allowed to profile memory with finer granularity steps than 4 MiB. Added sampling memory profiler to capture random allocations/deallocations. [#10598](https://github.com/ClickHouse/ClickHouse/pull/10598) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* `SimpleAggregateFunction` now also supports `sumMap`. [#10000](https://github.com/ClickHouse/ClickHouse/pull/10000) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Support `ALTER RENAME COLUMN` for the distributed table engine. Continuation of [#10727](https://github.com/ClickHouse/ClickHouse/issues/10727). Fixes [#10747](https://github.com/ClickHouse/ClickHouse/issues/10747). [#10887](https://github.com/ClickHouse/ClickHouse/pull/10887) ([alesapin](https://github.com/alesapin)).

#### Bug Fix

* Fix UBSan report in Decimal parse. This fixes [#7540](https://github.com/ClickHouse/ClickHouse/issues/7540). [#10512](https://github.com/ClickHouse/ClickHouse/pull/10512) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential floating point exception when parsing DateTime64. This fixes [#11374](https://github.com/ClickHouse/ClickHouse/issues/11374). [#11875](https://github.com/ClickHouse/ClickHouse/pull/11875) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix rare crash caused by using `Nullable` column in prewhere condition. [#11895](https://github.com/ClickHouse/ClickHouse/pull/11895) [#11608](https://github.com/ClickHouse/ClickHouse/issues/11608) [#11869](https://github.com/ClickHouse/ClickHouse/pull/11869) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Don't allow arrayJoin inside higher order functions. It was leading to broken protocol synchronization. This closes [#3933](https://github.com/ClickHouse/ClickHouse/issues/3933). [#11846](https://github.com/ClickHouse/ClickHouse/pull/11846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result of comparison of FixedString with constant String. This fixes [#11393](https://github.com/ClickHouse/ClickHouse/issues/11393). This bug appeared in version 20.4. [#11828](https://github.com/ClickHouse/ClickHouse/pull/11828) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result for `if` with NULLs in condition. [#11807](https://github.com/ClickHouse/ClickHouse/pull/11807) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `Scalar doesn't exist` exception when using `WITH <scalar subquery> ...` in `SELECT ... FROM merge_tree_table ...` https://github.com/ClickHouse/ClickHouse/issues/11621. [#11767](https://github.com/ClickHouse/ClickHouse/pull/11767) ([Amos Bird](https://github.com/amosbird)).
* Fix unexpected behaviour of queries like `SELECT *, xyz.*`, which succeeded while an error was expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)).
* Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)).
* Parse metadata stored in zookeeper before checking for equality. [#11739](https://github.com/ClickHouse/ClickHouse/pull/11739) ([Azat Khuzhin](https://github.com/azat)).
* Fixed LOGICAL_ERROR caused by wrong type deduction of complex literals in Values input format. [#11732](https://github.com/ClickHouse/ClickHouse/pull/11732) ([tavplubix](https://github.com/tavplubix)).
* Fix `ORDER BY ... WITH FILL` over const columns. [#11697](https://github.com/ClickHouse/ClickHouse/pull/11697) ([Anton Popov](https://github.com/CurtizJ)).
* Fix very rare race condition in SYSTEM SYNC REPLICA. If a replicated table is created and, at the same time, another client issues a `SYSTEM SYNC REPLICA` command on that table from a separate connection (this is unlikely, because that client should be aware the table was just created), a nullptr dereference is possible. [#11691](https://github.com/ClickHouse/ClickHouse/pull/11691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Pass proper timeouts when communicating with XDBC bridge. Recently timeouts were not respected when checking bridge liveness and receiving meta info. [#11690](https://github.com/ClickHouse/ClickHouse/pull/11690) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `LIMIT n WITH TIES` usage together with an `ORDER BY` clause that contains aliases. [#11689](https://github.com/ClickHouse/ClickHouse/pull/11689) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible `Pipeline stuck` for selects with parallel `FINAL`. Fixes [#11636](https://github.com/ClickHouse/ClickHouse/issues/11636). [#11682](https://github.com/ClickHouse/ClickHouse/pull/11682) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error which leads to an incorrect state of `system.mutations`. It may show that whole mutation is already done but the server still has `MUTATE_PART` tasks in the replication queue and tries to execute them. This fixes [#11611](https://github.com/ClickHouse/ClickHouse/issues/11611). [#11681](https://github.com/ClickHouse/ClickHouse/pull/11681) ([alesapin](https://github.com/alesapin)).
* Fix syntax highlighting in the CREATE USER query. [#11664](https://github.com/ClickHouse/ClickHouse/pull/11664) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for regular expressions with case-insensitive flags. This fixes [#11101](https://github.com/ClickHouse/ClickHouse/issues/11101) and fixes [#11506](https://github.com/ClickHouse/ClickHouse/issues/11506). [#11649](https://github.com/ClickHouse/ClickHouse/pull/11649) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove trivial count query optimization if row-level security is set. In previous versions, the user got the total count of records in the table instead of the filtered count. This fixes [#11352](https://github.com/ClickHouse/ClickHouse/issues/11352). [#11644](https://github.com/ClickHouse/ClickHouse/pull/11644) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bloom filters for String (data skipping indices). [#11638](https://github.com/ClickHouse/ClickHouse/pull/11638) ([Azat Khuzhin](https://github.com/azat)).
* Without the `-q` option, the database does not get created at startup. [#11604](https://github.com/ClickHouse/ClickHouse/pull/11604) ([giordyb](https://github.com/giordyb)).
* Fix error `Block structure mismatch` for queries with sampling reading from `Buffer` table. [#11602](https://github.com/ClickHouse/ClickHouse/pull/11602) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix wrong exit code of clickhouse-client when `exception.code() % 256 == 0`. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)).
* Fix race conditions in CREATE/DROP of different replicas of ReplicatedMergeTree. Continue to work if the table was not removed completely from ZooKeeper or not created successfully. This fixes [#11432](https://github.com/ClickHouse/ClickHouse/issues/11432). [#11592](https://github.com/ClickHouse/ClickHouse/pull/11592) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix trivial error in log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed rare segfault in `SHOW CREATE TABLE`. Fixes [#11490](https://github.com/ClickHouse/ClickHouse/issues/11490). [#11579](https://github.com/ClickHouse/ClickHouse/pull/11579) ([tavplubix](https://github.com/tavplubix)).
* Previously, all queries in an HTTP session had the same query_id. This is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)).
* Now the clickhouse-server Docker container prefers IPv6 when checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)).
* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled, PREWHERE is active, and SAMPLE or a high number of threads is used. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix shard_num/replica_num for `<node>` (breaks use_compact_format_in_distributed_parts_names). [#11528](https://github.com/ClickHouse/ClickHouse/pull/11528) ([Azat Khuzhin](https://github.com/azat)).
* Fix async INSERT into Distributed for prefer_localhost_replica=0 and w/o internal_replication. [#11527](https://github.com/ClickHouse/ClickHouse/pull/11527) ([Azat Khuzhin](https://github.com/azat)).
* Fix memory leak when exception is thrown in the middle of aggregation with `-State` functions. This fixes [#8995](https://github.com/ClickHouse/ClickHouse/issues/8995). [#11496](https://github.com/ClickHouse/ClickHouse/pull/11496) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `Pipeline stuck` exception for `INSERT SELECT FINAL` where `SELECT` (`max_threads`>1) has multiple streams but `INSERT` has only one (`max_insert_threads`==0). [#11455](https://github.com/ClickHouse/ClickHouse/pull/11455) ([Azat Khuzhin](https://github.com/azat)).
* Fix wrong result in queries like `select count() from t, u`. [#11454](https://github.com/ClickHouse/ClickHouse/pull/11454) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix the compressed size returned for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix potential uninitialized memory read in MergeTree shutdown if table was not created successfully. [#11420](https://github.com/ClickHouse/ClickHouse/pull/11420) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in JOIN over `LowCardinality(T)` and `Nullable(T)`. [#11380](https://github.com/ClickHouse/ClickHouse/issues/11380). [#11414](https://github.com/ClickHouse/ClickHouse/pull/11414) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix error code for wrong `USING` key. [#11373](https://github.com/ClickHouse/ClickHouse/issues/11373). [#11404](https://github.com/ClickHouse/ClickHouse/pull/11404) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed `geohashesInBox` with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
* Better errors for `joinGet()` functions. [#11389](https://github.com/ClickHouse/ClickHouse/pull/11389) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Remove redundant lock during parts send in ReplicatedMergeTree. [#11354](https://github.com/ClickHouse/ClickHouse/pull/11354) ([alesapin](https://github.com/alesapin)).
* Fix support for `\G` (vertical output) in clickhouse-client in multiline mode. This closes [#9933](https://github.com/ClickHouse/ClickHouse/issues/9933). [#11350](https://github.com/ClickHouse/ClickHouse/pull/11350) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential segfault when using `Lazy` database. [#11348](https://github.com/ClickHouse/ClickHouse/pull/11348) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in direct selects from `Join` table engine (without JOIN) and wrong nullability. [#11340](https://github.com/ClickHouse/ClickHouse/pull/11340) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Now merges are stopped before changing metadata in `ALTER` queries. [#11335](https://github.com/ClickHouse/ClickHouse/pull/11335) ([alesapin](https://github.com/alesapin)).
* Make writing to `MATERIALIZED VIEW` with setting `parallel_view_processing = 1` parallel again. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#11330](https://github.com/ClickHouse/ClickHouse/pull/11330) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `visitParamExtractRaw` when extracted JSON has strings with unbalanced { or [. [#11318](https://github.com/ClickHouse/ClickHouse/pull/11318) ([Ewout](https://github.com/devwout)).
* Fix very rare race condition in ThreadPool. [#11314](https://github.com/ClickHouse/ClickHouse/pull/11314) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix insignificant data race in `clickhouse-copier`. Found by integration tests. [#11313](https://github.com/ClickHouse/ClickHouse/pull/11313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash when `SET DEFAULT ROLE` is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in `Protobuf` format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when `cache` dictionary could return default value instead of normal (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Now `primary.idx` will be checked if it's defined in `CREATE` query. [#11199](https://github.com/ClickHouse/ClickHouse/pull/11199) ([alesapin](https://github.com/alesapin)).
* Fix possible error `Cannot capture column` for higher-order functions with `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `S3` globbing which could fail in case of more than 1000 keys and some backends. [#11179](https://github.com/ClickHouse/ClickHouse/pull/11179) ([Vladimir Chebotarev](https://github.com/excitoon)).
* If a data skipping index depended on columns that are modified during background merge (for SummingMergeTree, AggregatingMergeTree, as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after the merge, so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
* Fix a hang that sometimes happened during DROP of a table with engine=Kafka (or during server restarts). [#11145](https://github.com/ClickHouse/ClickHouse/pull/11145) ([filimonov](https://github.com/filimonov)).
* Fix excessive reserving of threads for simple queries (optimization for reducing the number of threads, which was partly broken after changes in pipeline). [#11114](https://github.com/ClickHouse/ClickHouse/pull/11114) ([Azat Khuzhin](https://github.com/azat)).
* Remove logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
* Fixed deadlock during server startup after update with changes in structure of system log tables. [#11106](https://github.com/ClickHouse/ClickHouse/pull/11106) ([alesapin](https://github.com/alesapin)).
* Fixed memory leak in registerDiskS3. [#11074](https://github.com/ClickHouse/ClickHouse/pull/11074) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix error `No such name in Block::erase()` when JOIN appears with PREWHERE or `optimize_move_to_prewhere` makes PREWHERE from WHERE. [#11051](https://github.com/ClickHouse/ClickHouse/pull/11051) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixes potentially missed data during termination of a Kafka engine table. [#11048](https://github.com/ClickHouse/ClickHouse/pull/11048) ([filimonov](https://github.com/filimonov)).
* Fixed parseDateTime64BestEffort argument resolution bugs. [#10925](https://github.com/ClickHouse/ClickHouse/issues/10925). [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
* Now it's possible to `ADD/DROP` and `RENAME` the same one column in a single `ALTER` query. Exception message for simultaneous `MODIFY` and `RENAME` became more clear. Partially fixes [#10669](https://github.com/ClickHouse/ClickHouse/issues/10669). [#11037](https://github.com/ClickHouse/ClickHouse/pull/11037) ([alesapin](https://github.com/alesapin)).
* Fixed parsing of S3 URLs. [#11036](https://github.com/ClickHouse/ClickHouse/pull/11036) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix memory tracking for two-level `GROUP BY` when there is a `LIMIT`. [#11022](https://github.com/ClickHouse/ClickHouse/pull/11022) ([Azat Khuzhin](https://github.com/azat)).
* Fix very rare potential use-after-free error in MergeTree if table was not created successfully. [#10986](https://github.com/ClickHouse/ClickHouse/pull/10986) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix metadata (relative path for rename) and data (relative path for symlink) handling for Atomic database. [#10980](https://github.com/ClickHouse/ClickHouse/pull/10980) ([Azat Khuzhin](https://github.com/azat)).
* Fix server crash on concurrent `ALTER` and `DROP DATABASE` queries with `Atomic` database engine. [#10968](https://github.com/ClickHouse/ClickHouse/pull/10968) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect raw data size in method getRawData(). [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
* Fix incompatibility of two-level aggregation between versions 20.1 and earlier. This incompatibility happens when different versions of ClickHouse are used on the initiator node and remote nodes, the size of the GROUP BY result is large, and aggregation is performed by a single String field. It leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid sending partially written files by the DistributedBlockOutputStream. [#10940](https://github.com/ClickHouse/ClickHouse/pull/10940) ([Azat Khuzhin](https://github.com/azat)).
* Fix crash in `SELECT count(notNullIn(NULL, []))`. [#10920](https://github.com/ClickHouse/ClickHouse/pull/10920) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a hang that sometimes happened during DROP of a table with engine=Kafka (or during server restarts). [#10910](https://github.com/ClickHouse/ClickHouse/pull/10910) ([filimonov](https://github.com/filimonov)).
* Now it's possible to execute multiple `ALTER RENAME` like `a TO b, c TO a`. [#10895](https://github.com/ClickHouse/ClickHouse/pull/10895) ([alesapin](https://github.com/alesapin)).
* Fix a possible race that could happen when getting the result of an aggregate function state from multiple threads for the same column. The only way (which I found) it can happen is using the `finalizeAggregation` function while reading from a table with `Memory` engine that stores an `AggregateFunction` state for a `quantile*` function. [#10890](https://github.com/ClickHouse/ClickHouse/pull/10890) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix backward compatibility with tuples in Distributed tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
* Fix SIGSEGV in StringHashTable (if such key does not exist). [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `WATCH` hangs after `LiveView` table was dropped from database with `Atomic` engine. [#10859](https://github.com/ClickHouse/ClickHouse/pull/10859) ([tavplubix](https://github.com/tavplubix)).
* Fixed bug in `ReplicatedMergeTree` which might cause some `ALTER` or `OPTIMIZE` query to hang, waiting for a replica after it became inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Now constraints are updated if the column participating in `CONSTRAINT` expression was renamed. Fixes [#10844](https://github.com/ClickHouse/ClickHouse/issues/10844). [#10847](https://github.com/ClickHouse/ClickHouse/pull/10847) ([alesapin](https://github.com/alesapin)).
* Fix potential read of uninitialized memory in cache dictionary. [#10834](https://github.com/ClickHouse/ClickHouse/pull/10834) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix columns order after Block::sortColumns() (also add a test that shows that it affects some real use case - Buffer engine). [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fix the issue with ODBC bridge when no quoting of identifiers is requested. This fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan and MSan report in DateLUT. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make use of `src_type` for correct type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Get rid of old libunwind patches. https://github.com/ClickHouse-Extras/libunwind/commit/500aa227911bd185a94bfc071d68f4d3b03cb3b1#r39048012 This allows disabling `-fno-omit-frame-pointer` in `clang` builds, which improves performance by at least 1% on average. [#10761](https://github.com/ClickHouse/ClickHouse/pull/10761) ([Amos Bird](https://github.com/amosbird)).
* Fix avgWeighted when using floating-point weight over multiple shards. [#10758](https://github.com/ClickHouse/ClickHouse/pull/10758) ([Baudouin Giard](https://github.com/bgiard)).
* Fix `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix combinator -OrNull and -OrDefault when combined with -State. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
* Fix crash in `generateRandom` with nested types. Fixes [#10583](https://github.com/ClickHouse/ClickHouse/issues/10583). [#10734](https://github.com/ClickHouse/ClickHouse/pull/10734) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix data corruption for `LowCardinality(FixedString)` key column in `SummingMergeTree` which could have happened after merge. Fixes [#10489](https://github.com/ClickHouse/ClickHouse/issues/10489). [#10721](https://github.com/ClickHouse/ClickHouse/pull/10721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix usage of primary key wrapped into a function with 'FINAL' modifier and 'ORDER BY' optimization. [#10715](https://github.com/ClickHouse/ClickHouse/pull/10715) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible buffer overflow in function `h3EdgeAngle`. [#10711](https://github.com/ClickHouse/ClickHouse/pull/10711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix disappearing totals. Totals could have been filtered if the query had a join or a subquery with an external WHERE condition. Fixes [#10674](https://github.com/ClickHouse/ClickHouse/issues/10674). [#10698](https://github.com/ClickHouse/ClickHouse/pull/10698) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix atomicity of HTTP insert. This fixes [#9666](https://github.com/ClickHouse/ClickHouse/issues/9666). [#10687](https://github.com/ClickHouse/ClickHouse/pull/10687) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fix multiple usages of `IN` operator with the identical set in one query. [#10686](https://github.com/ClickHouse/ClickHouse/pull/10686) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a bug that caused HTTP requests to get stuck on client close when `readonly=2` and `cancel_http_readonly_queries_on_client_close=1`. Fixes [#7939](https://github.com/ClickHouse/ClickHouse/issues/7939), [#7019](https://github.com/ClickHouse/ClickHouse/issues/7019), [#7736](https://github.com/ClickHouse/ClickHouse/issues/7736), [#7091](https://github.com/ClickHouse/ClickHouse/issues/7091). [#10684](https://github.com/ClickHouse/ClickHouse/pull/10684) ([tavplubix](https://github.com/tavplubix)).
* Fix order of parameters in AggregateTransform constructor. [#10667](https://github.com/ClickHouse/ClickHouse/pull/10667) ([palasonic1](https://github.com/palasonic1)).
* Fix the lack of parallel execution of remote queries with `distributed_aggregation_memory_efficient` enabled. Fixes [#10655](https://github.com/ClickHouse/ClickHouse/issues/10655). [#10664](https://github.com/ClickHouse/ClickHouse/pull/10664) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible incorrect number of rows for queries with `LIMIT`. Fixes [#10566](https://github.com/ClickHouse/ClickHouse/issues/10566), [#10709](https://github.com/ClickHouse/ClickHouse/issues/10709). [#10660](https://github.com/ClickHouse/ClickHouse/pull/10660) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug that locked concurrent ALTERs when a table has a lot of parts. [#10659](https://github.com/ClickHouse/ClickHouse/pull/10659) ([alesapin](https://github.com/alesapin)).
* Fix nullptr dereference in StorageBuffer if the server was shut down before table startup. [#10641](https://github.com/ClickHouse/ClickHouse/pull/10641) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix predicates optimization for distributed queries (`enable_optimize_predicate_expression=1`) for queries with `HAVING` section (i.e. when filtering on the server initiator is required), by preserving the order of expressions (and this is enough to fix), and also force aggregator use column names over indexes. Fixes: [#10613](https://github.com/ClickHouse/ClickHouse/issues/10613), [#11413](https://github.com/ClickHouse/ClickHouse/issues/11413). [#10621](https://github.com/ClickHouse/ClickHouse/pull/10621) ([Azat Khuzhin](https://github.com/azat)).
* Fix optimize_skip_unused_shards with LowCardinality. [#10611](https://github.com/ClickHouse/ClickHouse/pull/10611) ([Azat Khuzhin](https://github.com/azat)).
* Fix segfault in StorageBuffer when an exception occurs on server startup. Fixes [#10550](https://github.com/ClickHouse/ClickHouse/issues/10550). [#10609](https://github.com/ClickHouse/ClickHouse/pull/10609) ([tavplubix](https://github.com/tavplubix)).
* On `SYSTEM DROP DNS CACHE` query also drop caches, which are used to check if user is allowed to connect from some IP addresses. [#10608](https://github.com/ClickHouse/ClickHouse/pull/10608) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect scalar results inside the inner query of `MATERIALIZED VIEW` in case this query contained a dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed handling condition variable for synchronous mutations. In some cases signals to that condition variable could be lost. [#10588](https://github.com/ClickHouse/ClickHouse/pull/10588) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fixes a possible crash when `createDictionary()` is called before `loadStoredObject()` has finished. [#10587](https://github.com/ClickHouse/ClickHouse/pull/10587) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SELECT of an ALIAS column whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between DateTime64 and String values (just like for DateTime). [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix index corruption, which may occur in some cases after merging compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Disable the GROUP BY sharding_key optimization by default (`optimize_distributed_group_by_sharding_key` had been introduced and turned off by default due to the tricky nature of sharding_key analysis; a simple example is `if` in the sharding key) and fix it for WITH ROLLUP/CUBE/TOTALS. [#10516](https://github.com/ClickHouse/ClickHouse/pull/10516) ([Azat Khuzhin](https://github.com/azat)).
* Fixes: [#10263](https://github.com/ClickHouse/ClickHouse/issues/10263) (after that PR, distributed send via INSERT had been postponed on each INSERT). Fixes: [#8756](https://github.com/ClickHouse/ClickHouse/issues/8756) (that PR broke distributed sends when all of the following conditions are met (an unlikely setup for now, I guess): `internal_replication == false`, multiple local shards (which activates the hardlinking code), and `distributed_storage_policy` (which makes `link(2)` fail with `EXDEV`)). [#10486](https://github.com/ClickHouse/ClickHouse/pull/10486) ([Azat Khuzhin](https://github.com/azat)).
* Fixed error with "max_rows_to_sort" limit. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Get dictionary and check access rights only once per each call of any function reading external dictionaries. [#10928](https://github.com/ClickHouse/ClickHouse/pull/10928) ([Vitaly Baranov](https://github.com/vitlibar)).

#### Improvement

* Apply `TTL` for old data, after `ALTER MODIFY TTL` query. This behaviour is controlled by setting `materialize_ttl_after_modify`, which is enabled by default. [#11042](https://github.com/ClickHouse/ClickHouse/pull/11042) ([Anton Popov](https://github.com/CurtizJ)).
* When parsing C-style backslash escapes in string literals, VALUES and various text formats (this is an extension to SQL standard that is endemic for ClickHouse and MySQL), keep backslash if unknown escape sequence is found (e.g. `\%` or `\w`) that will make usage of `LIKE` and `match` regular expressions more convenient (it's enough to write `name LIKE 'used\_cars'` instead of `name LIKE 'used\\_cars'`) and more compatible at the same time. This fixes [#10922](https://github.com/ClickHouse/ClickHouse/issues/10922). [#11208](https://github.com/ClickHouse/ClickHouse/pull/11208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
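  A sketch of the effect on `LIKE` patterns:

  ```sql
  -- With this change, both forms match the literal underscore:
  SELECT 'used_cars' LIKE 'used\_cars';   -- unknown escape \_ keeps the backslash for LIKE
  SELECT 'used_cars' LIKE 'used\\_cars';  -- the previously required double backslash still works
  ```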
* When reading Decimal value, cut extra digits after point. This behaviour is more compatible with MySQL and PostgreSQL. This fixes [#10202](https://github.com/ClickHouse/ClickHouse/issues/10202). [#11831](https://github.com/ClickHouse/ClickHouse/pull/11831) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow to DROP replicated table if the metadata in ZooKeeper was already removed and does not exist (this is also the case when using TestKeeper for testing and the server was restarted). Allow to RENAME replicated table even if there is an error communicating with ZooKeeper. This fixes [#10720](https://github.com/ClickHouse/ClickHouse/issues/10720). [#11652](https://github.com/ClickHouse/ClickHouse/pull/11652) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Slightly improve diagnostic of reading decimal from string. This closes [#10202](https://github.com/ClickHouse/ClickHouse/issues/10202). [#11829](https://github.com/ClickHouse/ClickHouse/pull/11829) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix sleep invocation in signal handler. It was sleeping for less time than expected. [#11825](https://github.com/ClickHouse/ClickHouse/pull/11825) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* (Only Linux) OS related performance metrics (for CPU and I/O) will work even without `CAP_NET_ADMIN` capability. [#10544](https://github.com/ClickHouse/ClickHouse/pull/10544) ([Alexander Kazakov](https://github.com/Akazz)).
* Added `hostname` as an alias to function `hostName`. This feature was suggested by Victor Tarnavskiy from Yandex.Metrica. [#11821](https://github.com/ClickHouse/ClickHouse/pull/11821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added support for distributed `DDL` (update/delete/drop partition) on cross replication clusters. [#11703](https://github.com/ClickHouse/ClickHouse/pull/11703) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Emit a warning instead of an error in the server log at startup if we cannot listen on one of the listen addresses (e.g. IPv6 is unavailable inside Docker). Note that if the server fails to listen on all listed addresses, it will refuse to start up, as before. This fixes [#4406](https://github.com/ClickHouse/ClickHouse/issues/4406). [#11687](https://github.com/ClickHouse/ClickHouse/pull/11687) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Create the default user and database on Docker image startup. [#10637](https://github.com/ClickHouse/ClickHouse/pull/10637) ([Paramtamtam](https://github.com/tarampampam)).
* When a multiline query is printed to the server log, the lines are joined. Make it work correctly in the case of multiline string literals, identifiers, and single-line comments. This fixes [#3853](https://github.com/ClickHouse/ClickHouse/issues/3853). [#11686](https://github.com/ClickHouse/ClickHouse/pull/11686) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Multiple names are now allowed in commands: CREATE USER, CREATE ROLE, ALTER USER, SHOW CREATE USER, SHOW GRANTS and so on. [#11670](https://github.com/ClickHouse/ClickHouse/pull/11670) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add support for distributed DDL (`UPDATE/DELETE/DROP PARTITION`) on cross replication clusters. [#11508](https://github.com/ClickHouse/ClickHouse/pull/11508) ([frank lee](https://github.com/etah000)).
* Clear password from command line in `clickhouse-client` and `clickhouse-benchmark` if the user has specified it with explicit value. This prevents password exposure by `ps` and similar tools. [#11665](https://github.com/ClickHouse/ClickHouse/pull/11665) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Don't use debug info from ELF file if it doesn't correspond to the running binary. It is needed to avoid printing wrong function names and source locations in stack traces. This fixes [#7514](https://github.com/ClickHouse/ClickHouse/issues/7514). [#11657](https://github.com/ClickHouse/ClickHouse/pull/11657) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Return NULL/zero when value is not parsed completely in parseDateTimeBestEffortOrNull/Zero functions. This fixes [#7876](https://github.com/ClickHouse/ClickHouse/issues/7876). [#11653](https://github.com/ClickHouse/ClickHouse/pull/11653) ([alexey-milovidov](https://github.com/alexey-milovidov)).
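  For example:

  ```sql
  SELECT parseDateTimeBestEffortOrNull('not a date');  -- NULL
  SELECT parseDateTimeBestEffortOrZero('not a date');  -- the zero DateTime instead of an exception
  ```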
* Skip empty parameters in requested URL. They may appear when you write `http://localhost:8123/?&a=b` or `http://localhost:8123/?a=b&&c=d`. This closes [#10749](https://github.com/ClickHouse/ClickHouse/issues/10749). [#11651](https://github.com/ClickHouse/ClickHouse/pull/11651) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow using `groupArrayArray` and `groupUniqArrayArray` as `SimpleAggregateFunction`. [#11650](https://github.com/ClickHouse/ClickHouse/pull/11650) ([Volodymyr Kuznetsov](https://github.com/ksvladimir)).
* Allow comparison with constant strings by implicit conversions when analysing index conditions on other types. This may close [#11630](https://github.com/ClickHouse/ClickHouse/issues/11630). [#11648](https://github.com/ClickHouse/ClickHouse/pull/11648) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support configuring default HTTPHandlers (see https://github.com/ClickHouse/ClickHouse/pull/7572#issuecomment-642815377). [#11628](https://github.com/ClickHouse/ClickHouse/pull/11628) ([Winter Zhang](https://github.com/zhang2014)).
* Make more input formats work with the Kafka engine. Fix the issue with premature flushes. Fix the performance issue when `kafka_num_consumers` is greater than the number of partitions in the topic. [#11599](https://github.com/ClickHouse/ClickHouse/pull/11599) ([filimonov](https://github.com/filimonov)).
* Improve `multiple_joins_rewriter_version=2` logic. Fix unknown columns error for lambda aliases. [#11587](https://github.com/ClickHouse/ClickHouse/pull/11587) ([Artem Zuikov](https://github.com/4ertus2)).
|
||||
* Better exception message when cannot parse columns declaration list. This closes [#10403](https://github.com/ClickHouse/ClickHouse/issues/10403). [#11537](https://github.com/ClickHouse/ClickHouse/pull/11537) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Improve `enable_optimize_predicate_expression=1` logic for VIEW. [#11513](https://github.com/ClickHouse/ClickHouse/pull/11513) ([Artem Zuikov](https://github.com/4ertus2)).
|
||||
* Adding support for PREWHERE in live view tables. [#11495](https://github.com/ClickHouse/ClickHouse/pull/11495) ([vzakaznikov](https://github.com/vzakaznikov)).
|
||||
* Automatically update DNS cache, which is used to check if user is allowed to connect from an address. [#11487](https://github.com/ClickHouse/ClickHouse/pull/11487) ([tavplubix](https://github.com/tavplubix)).
|
||||
* OPTIMIZE FINAL will force merge even if concurrent merges are performed. This closes [#11309](https://github.com/ClickHouse/ClickHouse/issues/11309) and closes [#11322](https://github.com/ClickHouse/ClickHouse/issues/11322). [#11346](https://github.com/ClickHouse/ClickHouse/pull/11346) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Suppress output of cancelled queries in clickhouse-client. In previous versions result may continue to print in terminal even after you press Ctrl+C to cancel query. This closes [#9473](https://github.com/ClickHouse/ClickHouse/issues/9473). [#11342](https://github.com/ClickHouse/ClickHouse/pull/11342) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Now history file is updated after each query and there is no race condition if multiple clients use one history file. This fixes [#9897](https://github.com/ClickHouse/ClickHouse/issues/9897). [#11453](https://github.com/ClickHouse/ClickHouse/pull/11453) ([Tagir Kuskarov](https://github.com/kuskarov)).
|
||||
* Better log messages in while reloading configuration. [#11341](https://github.com/ClickHouse/ClickHouse/pull/11341) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Remove trailing whitespaces from formatted queries in `clickhouse-client` or `clickhouse-format` in some cases. [#11325](https://github.com/ClickHouse/ClickHouse/pull/11325) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Add setting "output_format_pretty_max_value_width". If value is longer, it will be cut to avoid output of too large values in terminal. This closes [#11140](https://github.com/ClickHouse/ClickHouse/issues/11140). [#11324](https://github.com/ClickHouse/ClickHouse/pull/11324) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Better exception message when there is a shortage of memory mappings. This closes [#11027](https://github.com/ClickHouse/ClickHouse/issues/11027). [#11316](https://github.com/ClickHouse/ClickHouse/pull/11316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support (U)Int8, (U)Int16, Date in ASOF JOIN. [#11301](https://github.com/ClickHouse/ClickHouse/pull/11301) ([Artem Zuikov](https://github.com/4ertus2)).
* Support the `kafka_client_id` parameter for Kafka tables. It also changes the default `client.id` used by ClickHouse when communicating with Kafka to be more verbose and usable. [#11252](https://github.com/ClickHouse/ClickHouse/pull/11252) ([filimonov](https://github.com/filimonov)).
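
  A sketch of a Kafka table with an explicit client id (broker, topic and other values are placeholders, not from the PR):

  ```sql
  CREATE TABLE queue (message String)
  ENGINE = Kafka
  SETTINGS kafka_broker_list = 'kafka:9092',
           kafka_topic_list = 'events',
           kafka_group_name = 'clickhouse-consumers',
           kafka_format = 'JSONEachRow',
           kafka_client_id = 'clickhouse-ingest';
  ```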
* Keep the value of the `DistributedFilesToInsert` metric on exceptions. In previous versions, the value was set when we were about to send some files, but dropped to zero if there was an exception while some files were still pending. Now it corresponds to the number of pending files in the filesystem. [#11220](https://github.com/ClickHouse/ClickHouse/pull/11220) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for multi-word data type names (such as `DOUBLE PRECISION` and `CHAR VARYING`) for better SQL compatibility. [#11214](https://github.com/ClickHouse/ClickHouse/pull/11214) ([Павел Потемкин](https://github.com/Potya)).
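
  A sketch, assuming these synonyms map to native types as in standard SQL:

  ```sql
  CREATE TABLE compat
  (
      x DOUBLE PRECISION,  -- synonym for Float64
      s CHAR VARYING       -- synonym for String
  )
  ENGINE = Memory;
  ```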
* Provide synonyms for some data types. [#10856](https://github.com/ClickHouse/ClickHouse/pull/10856) ([Павел Потемкин](https://github.com/Potya)).
* The query log is now enabled by default. [#11184](https://github.com/ClickHouse/ClickHouse/pull/11184) ([Ivan Blinkov](https://github.com/blinkov)).
* Show the authentication type in the `system.users` table and when executing the `SHOW CREATE USER` query. [#11080](https://github.com/ClickHouse/ClickHouse/pull/11080) ([Vitaly Baranov](https://github.com/vitlibar)).
* Remove data on explicit `DROP DATABASE` for the `Memory` database engine. Fixes [#10557](https://github.com/ClickHouse/ClickHouse/issues/10557). [#11021](https://github.com/ClickHouse/ClickHouse/pull/11021) ([tavplubix](https://github.com/tavplubix)).
* Set thread names for internal threads of the rdkafka library. Make logs from rdkafka available in server logs. [#10983](https://github.com/ClickHouse/ClickHouse/pull/10983) ([Azat Khuzhin](https://github.com/azat)).
* Support for Unicode whitespace in queries. This helps when queries are copy-pasted from Word or from a web page. This fixes [#10896](https://github.com/ClickHouse/ClickHouse/issues/10896). [#10903](https://github.com/ClickHouse/ClickHouse/pull/10903) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow large UInt types as the index in function `tupleElement`. [#10874](https://github.com/ClickHouse/ClickHouse/pull/10874) ([hcz](https://github.com/hczhcz)).
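
  For example:

  ```sql
  -- The index may now be a wide unsigned integer type, not only UInt8
  SELECT tupleElement((10, 20, 30), toUInt64(2));  -- 20
  ```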
* Respect `prefer_localhost_replica`/`load_balancing` on INSERT into Distributed. [#10867](https://github.com/ClickHouse/ClickHouse/pull/10867) ([Azat Khuzhin](https://github.com/azat)).
* Introduce `min_insert_block_size_rows_for_materialized_views`, `min_insert_block_size_bytes_for_materialized_views` settings. These settings are similar to `min_insert_block_size_rows` and `min_insert_block_size_bytes`, but are applied only for blocks inserted into `MATERIALIZED VIEW`. They help to control block squashing while pushing to MVs and avoid excessive memory usage. [#10858](https://github.com/ClickHouse/ClickHouse/pull/10858) ([Azat Khuzhin](https://github.com/azat)).
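
  For example (the values are illustrative, not recommendations):

  ```sql
  -- Squash blocks pushed to materialized views independently of the main INSERT
  SET min_insert_block_size_rows_for_materialized_views = 1048576;
  SET min_insert_block_size_bytes_for_materialized_views = 268435456;
  ```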
* Get rid of an exception from the replicated queue during server shutdown. Fixes [#10819](https://github.com/ClickHouse/ClickHouse/issues/10819). [#10841](https://github.com/ClickHouse/ClickHouse/pull/10841) ([alesapin](https://github.com/alesapin)).
* Ensure that `varSamp`, `varPop` cannot return negative results due to numerical errors and that `stddevSamp`, `stddevPop` cannot be calculated from negative variance. This fixes [#10532](https://github.com/ClickHouse/ClickHouse/issues/10532). [#10829](https://github.com/ClickHouse/ClickHouse/pull/10829) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Better DNS exception message. This fixes [#10813](https://github.com/ClickHouse/ClickHouse/issues/10813). [#10828](https://github.com/ClickHouse/ClickHouse/pull/10828) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Change the HTTP response code for some parse errors to 400 Bad Request. This fixes [#10636](https://github.com/ClickHouse/ClickHouse/issues/10636). [#10640](https://github.com/ClickHouse/ClickHouse/pull/10640) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Print a message if clickhouse-client is newer than clickhouse-server. [#10627](https://github.com/ClickHouse/ClickHouse/pull/10627) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Adding support for the `INSERT INTO [db.]table WATCH` query. [#10498](https://github.com/ClickHouse/ClickHouse/pull/10498) ([vzakaznikov](https://github.com/vzakaznikov)).
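
  A sketch, assuming a live view `lv` and tables `source_table`/`target_table` (all names are illustrative):

  ```sql
  SET allow_experimental_live_view = 1;
  CREATE LIVE VIEW lv AS SELECT count() FROM source_table;
  INSERT INTO target_table WATCH lv;  -- streams live view updates into the target table
  ```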
* Allow passing `quota_key` in clickhouse-client. This closes [#10227](https://github.com/ClickHouse/ClickHouse/issues/10227). [#10270](https://github.com/ClickHouse/ClickHouse/pull/10270) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Performance Improvement

* Allow multiple replicas to assign merges, mutations, partition drop, move and replace concurrently. This closes [#10367](https://github.com/ClickHouse/ClickHouse/issues/10367). [#11639](https://github.com/ClickHouse/ClickHouse/pull/11639) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#11795](https://github.com/ClickHouse/ClickHouse/pull/11795) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimization of GROUP BY with respect to the table sorting key, enabled with the `optimize_aggregation_in_order` setting. [#9113](https://github.com/ClickHouse/ClickHouse/pull/9113) ([dimarub2000](https://github.com/dimarub2000)).
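
  A sketch, assuming the table is sorted by `user_id` (table and column names are illustrative):

  ```sql
  SET optimize_aggregation_in_order = 1;
  -- GROUP BY over a prefix of the sorting key can be aggregated in streaming fashion
  SELECT user_id, count() FROM visits GROUP BY user_id;
  ```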
* `SELECT` queries with `FINAL` are executed in parallel. Added the setting `max_final_threads` to limit the number of threads used. [#10463](https://github.com/ClickHouse/ClickHouse/pull/10463) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
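
  For example (`replacing_table` is an illustrative name):

  ```sql
  SET max_final_threads = 4;           -- cap the number of threads used for FINAL
  SELECT * FROM replacing_table FINAL;
  ```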
* Improve performance for INSERT queries via `INSERT SELECT` or INSERT with clickhouse-client when small blocks are generated (a typical case with parallel parsing). This fixes [#11275](https://github.com/ClickHouse/ClickHouse/issues/11275). Fix the issue that CONSTRAINTs were not working for DEFAULT fields. This fixes [#11273](https://github.com/ClickHouse/ClickHouse/issues/11273). Fix the issue that CONSTRAINTS were ignored for TEMPORARY tables. This fixes [#11274](https://github.com/ClickHouse/ClickHouse/issues/11274). [#11276](https://github.com/ClickHouse/ClickHouse/pull/11276) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimization that eliminates min/max/any aggregators of GROUP BY keys in the SELECT section, enabled with the `optimize_aggregators_of_group_by_keys` setting. [#11667](https://github.com/ClickHouse/ClickHouse/pull/11667) ([xPoSx](https://github.com/xPoSx)). [#11806](https://github.com/ClickHouse/ClickHouse/pull/11806) ([Azat Khuzhin](https://github.com/azat)).
* New optimization that takes all operations out of the `any` function, enabled with `optimize_move_functions_out_of_any`. [#11529](https://github.com/ClickHouse/ClickHouse/pull/11529) ([Ruslan](https://github.com/kamalov-ruslan)).
* Improve performance of `clickhouse-client` in interactive mode when Pretty formats are used. In previous versions, a significant amount of time could be spent calculating the visible width of UTF-8 strings. This closes [#11323](https://github.com/ClickHouse/ClickHouse/issues/11323). [#11323](https://github.com/ClickHouse/ClickHouse/pull/11323) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improved performance for queries with `ORDER BY` and a small `LIMIT` (less than `max_block_size`). [#11171](https://github.com/ClickHouse/ClickHouse/pull/11171) ([Albert Kidrachev](https://github.com/Provet)).
* Add runtime CPU detection to select and dispatch the best function implementation. Add support for code generation for multiple targets. This closes [#1017](https://github.com/ClickHouse/ClickHouse/issues/1017). [#10058](https://github.com/ClickHouse/ClickHouse/pull/10058) ([DimasKovas](https://github.com/DimasKovas)).
* Enable `mlock` of the clickhouse binary by default. It will prevent the clickhouse executable from being paged out under high IO load. [#11139](https://github.com/ClickHouse/ClickHouse/pull/11139) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make queries with the `sum` aggregate function and without GROUP BY keys run multiple times faster. [#10992](https://github.com/ClickHouse/ClickHouse/pull/10992) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improving radix sort (used in `ORDER BY` with simple keys) by removing some redundant data moves. [#10981](https://github.com/ClickHouse/ClickHouse/pull/10981) ([Arslan Gumerov](https://github.com/g-arslan)).
* Sort bigger parts of the left table in MergeJoin. Buffer left blocks in memory. Add the `partial_merge_join_left_table_buffer_bytes` setting to manage the left block buffer sizes. [#10601](https://github.com/ClickHouse/ClickHouse/pull/10601) ([Artem Zuikov](https://github.com/4ertus2)).
* Remove duplicate ORDER BY and DISTINCT from subqueries; this optimization is enabled with `optimize_duplicate_order_by_and_distinct`. [#10067](https://github.com/ClickHouse/ClickHouse/pull/10067) ([Mikhail Malafeev](https://github.com/demo-99)).
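
  A minimal sketch of a query the optimization applies to:

  ```sql
  SET optimize_duplicate_order_by_and_distinct = 1;
  -- The inner DISTINCT and ORDER BY are redundant and can be eliminated
  SELECT DISTINCT number
  FROM (SELECT DISTINCT number FROM numbers(3) ORDER BY number)
  ORDER BY number;
  ```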
* This feature eliminates functions of other keys in the GROUP BY section, enabled with `optimize_group_by_function_keys`. [#10051](https://github.com/ClickHouse/ClickHouse/pull/10051) ([xPoSx](https://github.com/xPoSx)).
* New optimization that takes arithmetic operations out of aggregate functions, enabled with `optimize_arithmetic_operations_in_aggregate_functions`. [#10047](https://github.com/ClickHouse/ClickHouse/pull/10047) ([Ruslan](https://github.com/kamalov-ruslan)).
* Use an HTTP client for S3 based on Poco instead of curl. This will improve performance and lower memory usage of S3 storage and table functions. [#11230](https://github.com/ClickHouse/ClickHouse/pull/11230) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix Kafka performance issue related to reschedules based on limits, which were always applied. [#11149](https://github.com/ClickHouse/ClickHouse/pull/11149) ([filimonov](https://github.com/filimonov)).
* Enable `percpu_arena:percpu` for jemalloc (this will reduce memory fragmentation due to thread pools). [#11084](https://github.com/ClickHouse/ClickHouse/pull/11084) ([Azat Khuzhin](https://github.com/azat)).
* Optimize memory usage when reading a response from an S3 HTTP client. [#11561](https://github.com/ClickHouse/ClickHouse/pull/11561) ([Pavel Kovalenko](https://github.com/Jokser)).
* Adjust the default Kafka settings for better performance. [#11388](https://github.com/ClickHouse/ClickHouse/pull/11388) ([filimonov](https://github.com/filimonov)).

#### Experimental Feature

* Add data types `Point` (`Tuple(Float64, Float64)`) and `Polygon` (`Array(Array(Tuple(Float64, Float64)))`). [#10678](https://github.com/ClickHouse/ClickHouse/pull/10678) ([Alexey Ilyukhov](https://github.com/livace)).
* Adds a `hasSubstr` function that checks whether one array contains another as a contiguous subsequence. Note: this function is likely to be renamed without further notice. [#11071](https://github.com/ClickHouse/ClickHouse/pull/11071) ([Ryad Zenine](https://github.com/r-zenine)).
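
  For example:

  ```sql
  SELECT hasSubstr([1, 2, 3, 4], [2, 3]);  -- 1: contiguous subsequence
  SELECT hasSubstr([1, 2, 3, 4], [2, 4]);  -- 0: elements present but not contiguous
  ```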
* Added OpenCL support and a bitonic sort algorithm, which can be used for sorting integer types in a single column. Needs to be built with the flag `-DENABLE_OPENCL=1`. To use the bitonic sort algorithm instead of others, set the `special_sort` setting to `bitonic_sort` and make sure that OpenCL is available. This feature does not improve performance or anything else; it is only provided as an example and for demonstration purposes. It is likely to be removed in the near future if there is no further development in this direction. [#10232](https://github.com/ClickHouse/ClickHouse/pull/10232) ([Ri](https://github.com/margaritiko)).

#### Build/Testing/Packaging Improvement
* Enable clang-tidy for programs and utils. [#10991](https://github.com/ClickHouse/ClickHouse/pull/10991) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove dependency on `tzdata`: do not fail if the `/usr/share/zoneinfo` directory does not exist. Note that all timezones work in ClickHouse even without tzdata installed in the system. [#11827](https://github.com/ClickHouse/ClickHouse/pull/11827) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added MSan and UBSan stress tests. Note that we already have MSan and UBSan for functional tests, and a "stress" test is another kind of test. [#10871](https://github.com/ClickHouse/ClickHouse/pull/10871) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Print the compiler build id in crash messages. It will make us slightly more certain about which binary has crashed. Added a new function `buildId`. [#11824](https://github.com/ClickHouse/ClickHouse/pull/11824) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a test to ensure that mutations continue to work after the FREEZE query. [#11820](https://github.com/ClickHouse/ClickHouse/pull/11820) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Don't allow tests with the "fail" substring in their names because it makes looking at test results in the browser less convenient when you press Ctrl+F and search for "fail". [#11817](https://github.com/ClickHouse/ClickHouse/pull/11817) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Removes unused imports from HTTPHandlerFactory. [#11660](https://github.com/ClickHouse/ClickHouse/pull/11660) ([Bharat Nallan](https://github.com/bharatnc)).
* Added random sampling of instances where the copier is executed. It is needed to avoid the `Too many simultaneous queries` error. Also increased the timeout and decreased the fault probability. [#11573](https://github.com/ClickHouse/ClickHouse/pull/11573) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a missing include. [#11525](https://github.com/ClickHouse/ClickHouse/pull/11525) ([Matwey V. Kornilov](https://github.com/matwey)).
* Speed up the build by removing old example programs. Also found some orphan functional tests. [#11486](https://github.com/ClickHouse/ClickHouse/pull/11486) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Increase ccache size for builds in CI. [#11450](https://github.com/ClickHouse/ClickHouse/pull/11450) ([alesapin](https://github.com/alesapin)).
* Leave only unit_tests_dbms in the deb build. [#11429](https://github.com/ClickHouse/ClickHouse/pull/11429) ([Ilya Yatsishin](https://github.com/qoega)).
* Update librdkafka to version [1.4.2](https://github.com/edenhill/librdkafka/releases/tag/v1.4.2). [#11256](https://github.com/ClickHouse/ClickHouse/pull/11256) ([filimonov](https://github.com/filimonov)).
* Refactor CMake build files. [#11390](https://github.com/ClickHouse/ClickHouse/pull/11390) ([Ivan](https://github.com/abyss7)).
* Fix several flaky integration tests. [#11355](https://github.com/ClickHouse/ClickHouse/pull/11355) ([alesapin](https://github.com/alesapin)).
* Add support for unit tests run with UBSan. [#11345](https://github.com/ClickHouse/ClickHouse/pull/11345) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove a redundant timeout from the integration test `test_insertion_sync_fails_with_timeout`. [#11343](https://github.com/ClickHouse/ClickHouse/pull/11343) ([alesapin](https://github.com/alesapin)).
* Better check for hung queries in clickhouse-test. [#11321](https://github.com/ClickHouse/ClickHouse/pull/11321) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Emit a warning if the server was built in debug mode or with sanitizers. [#11304](https://github.com/ClickHouse/ClickHouse/pull/11304) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now clickhouse-test checks server aliveness before tests run. [#11285](https://github.com/ClickHouse/ClickHouse/pull/11285) ([alesapin](https://github.com/alesapin)).
* Fix the potentially flaky test `00731_long_merge_tree_select_opened_files.sh`. It does not fail frequently, but we have discovered a potential race condition in this test while experimenting with ThreadFuzzer: [#9814](https://github.com/ClickHouse/ClickHouse/issues/9814). See [link](https://clickhouse-test-reports.s3.yandex.net/9814/40e3023e215df22985d275bf85f4d2290897b76b/functional_stateless_tests_(unbundled).html#fail1) for an example. [#11270](https://github.com/ClickHouse/ClickHouse/pull/11270) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Repeat a test in CI if the `curl` invocation timed out. It is possible due to system hangups of 10+ seconds that are typical in our CI infrastructure. This fixes [#11267](https://github.com/ClickHouse/ClickHouse/issues/11267). [#11268](https://github.com/ClickHouse/ClickHouse/pull/11268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a test for the Join table engine from @donmikel. This closes [#9158](https://github.com/ClickHouse/ClickHouse/issues/9158). [#11265](https://github.com/ClickHouse/ClickHouse/pull/11265) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix several insignificant errors in unit tests. [#11262](https://github.com/ClickHouse/ClickHouse/pull/11262) ([alesapin](https://github.com/alesapin)).
* Now parts of the linker command for the `cctz` library will not be shuffled with other libraries. [#11213](https://github.com/ClickHouse/ClickHouse/pull/11213) ([alesapin](https://github.com/alesapin)).
* Split /programs/server into an actual program and a library. [#11186](https://github.com/ClickHouse/ClickHouse/pull/11186) ([Ivan](https://github.com/abyss7)).
* Improve build scripts for protobuf & gRPC. [#11172](https://github.com/ClickHouse/ClickHouse/pull/11172) ([Vitaly Baranov](https://github.com/vitlibar)).
* Enable a performance test that was not working. [#11158](https://github.com/ClickHouse/ClickHouse/pull/11158) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Create the root S3 bucket for tests before any CH instance is started. [#11142](https://github.com/ClickHouse/ClickHouse/pull/11142) ([Pavel Kovalenko](https://github.com/Jokser)).
* Add a performance test for non-constant polygons. [#11141](https://github.com/ClickHouse/ClickHouse/pull/11141) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixing the `00979_live_view_watch_continuous_aggregates` test. [#11024](https://github.com/ClickHouse/ClickHouse/pull/11024) ([vzakaznikov](https://github.com/vzakaznikov)).
* Add the ability to run ZooKeeper in integration tests over tmpfs. [#11002](https://github.com/ClickHouse/ClickHouse/pull/11002) ([alesapin](https://github.com/alesapin)).
* Wait for odbc-bridge with exponential backoff. The previous wait time of 200 ms was not enough in our CI environment. [#10990](https://github.com/ClickHouse/ClickHouse/pull/10990) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a non-deterministic test. [#10989](https://github.com/ClickHouse/ClickHouse/pull/10989) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a test for empty external data. [#10926](https://github.com/ClickHouse/ClickHouse/pull/10926) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The database is recreated for every test. This improves separation of tests. [#10902](https://github.com/ClickHouse/ClickHouse/pull/10902) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added more asserts in columns code. [#10833](https://github.com/ClickHouse/ClickHouse/pull/10833) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Better cooperation with sanitizers. Print information about the query_id in sanitizer failure messages. [#10832](https://github.com/ClickHouse/ClickHouse/pull/10832) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix an obvious race condition in the "Split build smoke test" check. [#10820](https://github.com/ClickHouse/ClickHouse/pull/10820) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a (false) MSan report in MergeTreeIndexFullText. The issue first appeared in [#9968](https://github.com/ClickHouse/ClickHouse/issues/9968). [#10801](https://github.com/ClickHouse/ClickHouse/pull/10801) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add an MSan suppression for the MariaDB client library. [#10800](https://github.com/ClickHouse/ClickHouse/pull/10800) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The gRPC make couldn't find protobuf files; changed the makefile by adding the right link. [#10794](https://github.com/ClickHouse/ClickHouse/pull/10794) ([mnkonkova](https://github.com/mnkonkova)).
* Enable extra warnings (`-Weverything`) for base, utils, programs. Note that we already have them for most of the code. [#10779](https://github.com/ClickHouse/ClickHouse/pull/10779) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Suppressions of warnings from libraries were mistakenly declared as public in [#10396](https://github.com/ClickHouse/ClickHouse/issues/10396). [#10776](https://github.com/ClickHouse/ClickHouse/pull/10776) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Restore a patch that was accidentally deleted in [#10396](https://github.com/ClickHouse/ClickHouse/issues/10396). [#10774](https://github.com/ClickHouse/ClickHouse/pull/10774) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix performance test errors, part 2. [#10773](https://github.com/ClickHouse/ClickHouse/pull/10773) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix performance test errors. [#10766](https://github.com/ClickHouse/ClickHouse/pull/10766) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update cross-builds to use the clang-10 compiler. [#10724](https://github.com/ClickHouse/ClickHouse/pull/10724) ([Ivan](https://github.com/abyss7)).
* Update the instruction to install RPM packages. This was suggested by Denis (TG login @ldviolet) and implemented by Arkady Shejn. [#10707](https://github.com/ClickHouse/ClickHouse/pull/10707) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Trying to fix the `tests/queries/0_stateless/01246_insert_into_watch_live_view.py` test. [#10670](https://github.com/ClickHouse/ClickHouse/pull/10670) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fixing and re-enabling the 00979_live_view_watch_continuous_aggregates.py test. [#10658](https://github.com/ClickHouse/ClickHouse/pull/10658) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix OOM in the ASan stress test. [#10646](https://github.com/ClickHouse/ClickHouse/pull/10646) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a UBSan report (adding zero to nullptr) in HashTable that appeared after migration to clang-10. [#10638](https://github.com/ClickHouse/ClickHouse/pull/10638) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove the external call to the `ld` (bfd) linker during tzdata processing at compile time. [#10634](https://github.com/ClickHouse/ClickHouse/pull/10634) ([alesapin](https://github.com/alesapin)).
* Allow using `lld` to link blobs (resources). [#10632](https://github.com/ClickHouse/ClickHouse/pull/10632) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a UBSan report in the `LZ4` library. [#10631](https://github.com/ClickHouse/ClickHouse/pull/10631) ([alexey-milovidov](https://github.com/alexey-milovidov)). See also [https://github.com/lz4/lz4/issues/857](https://github.com/lz4/lz4/issues/857).
* Update LZ4 to the latest dev branch. [#10630](https://github.com/ClickHouse/ClickHouse/pull/10630) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added an auto-generated machine-readable file with the list of stable versions. [#10628](https://github.com/ClickHouse/ClickHouse/pull/10628) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the `capnproto` version check for `capnp::UnalignedFlatArrayMessageReader`. [#10618](https://github.com/ClickHouse/ClickHouse/pull/10618) ([Matwey V. Kornilov](https://github.com/matwey)).
* Lower memory usage in tests. [#10617](https://github.com/ClickHouse/ClickHouse/pull/10617) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixing hard-coded timeouts in new live view tests. [#10604](https://github.com/ClickHouse/ClickHouse/pull/10604) ([vzakaznikov](https://github.com/vzakaznikov)).
* Increasing the timeout when opening a client in tests/queries/0_stateless/helpers/client.py. [#10599](https://github.com/ClickHouse/ClickHouse/pull/10599) ([vzakaznikov](https://github.com/vzakaznikov)).
* Enable ThinLTO for clang builds, continuation of https://github.com/ClickHouse/ClickHouse/pull/10435. [#10585](https://github.com/ClickHouse/ClickHouse/pull/10585) ([Amos Bird](https://github.com/amosbird)).
* Adding fuzzers and preparing for oss-fuzz integration. [#10546](https://github.com/ClickHouse/ClickHouse/pull/10546) ([kyprizel](https://github.com/kyprizel)).
* Fix the FreeBSD build. [#10150](https://github.com/ClickHouse/ClickHouse/pull/10150) ([Ivan](https://github.com/abyss7)).
* Add a new build for query tests using the pytest framework. [#10039](https://github.com/ClickHouse/ClickHouse/pull/10039) ([Ivan](https://github.com/abyss7)).

## ClickHouse release v20.4

### ClickHouse release v20.4.6.53-stable 2020-06-25

#### Bug Fix

* Fix rare crash caused by using `Nullable` column in prewhere condition. Continuation of [#11608](https://github.com/ClickHouse/ClickHouse/issues/11608). [#11869](https://github.com/ClickHouse/ClickHouse/pull/11869) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Don't allow arrayJoin inside higher-order functions. It was leading to broken protocol synchronization. This closes [#3933](https://github.com/ClickHouse/ClickHouse/issues/3933). [#11846](https://github.com/ClickHouse/ClickHouse/pull/11846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result of comparison of FixedString with constant String. This fixes [#11393](https://github.com/ClickHouse/ClickHouse/issues/11393). This bug appeared in version 20.4. [#11828](https://github.com/ClickHouse/ClickHouse/pull/11828) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong result for `if()` with NULLs in the condition. [#11807](https://github.com/ClickHouse/ClickHouse/pull/11807) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix unexpected behaviour of queries like `SELECT *, xyz.*`, which succeeded while an error was expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)).
* Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)).
* Fixed LOGICAL_ERROR caused by wrong type deduction of complex literals in the Values input format. [#11732](https://github.com/ClickHouse/ClickHouse/pull/11732) ([tavplubix](https://github.com/tavplubix)).
* Fix `ORDER BY ... WITH FILL` over const columns. [#11697](https://github.com/ClickHouse/ClickHouse/pull/11697) ([Anton Popov](https://github.com/CurtizJ)).
* Pass proper timeouts when communicating with the XDBC bridge. Recently timeouts were not respected when checking bridge liveness and receiving meta info. [#11690](https://github.com/ClickHouse/ClickHouse/pull/11690) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `LIMIT n WITH TIES` usage together with an `ORDER BY` statement that contains aliases. [#11689](https://github.com/ClickHouse/ClickHouse/pull/11689) ([Anton Popov](https://github.com/CurtizJ)).
* Fix an error which leads to an incorrect state of `system.mutations`. It may show that the whole mutation is already done, but the server still has `MUTATE_PART` tasks in the replication queue and tries to execute them. This fixes [#11611](https://github.com/ClickHouse/ClickHouse/issues/11611). [#11681](https://github.com/ClickHouse/ClickHouse/pull/11681) ([alesapin](https://github.com/alesapin)).
* Add support for regular expressions with case-insensitive flags. This fixes [#11101](https://github.com/ClickHouse/ClickHouse/issues/11101) and fixes [#11506](https://github.com/ClickHouse/ClickHouse/issues/11506). [#11649](https://github.com/ClickHouse/ClickHouse/pull/11649) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove the trivial count query optimization if row-level security is set. In previous versions, the user got the total count of records in a table instead of the filtered count. This fixes [#11352](https://github.com/ClickHouse/ClickHouse/issues/11352). [#11644](https://github.com/ClickHouse/ClickHouse/pull/11644) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bloom filters for String (data skipping indices). [#11638](https://github.com/ClickHouse/ClickHouse/pull/11638) ([Azat Khuzhin](https://github.com/azat)).
* Fix rare crash caused by using `Nullable` column in prewhere condition. (Probably it is connected with [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572) somehow.) [#11608](https://github.com/ClickHouse/ClickHouse/pull/11608) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the error `Block structure mismatch` for queries with sampling reading from a `Buffer` table. [#11602](https://github.com/ClickHouse/ClickHouse/pull/11602) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix wrong exit code of clickhouse-client when exception.code() % 256 = 0. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)).
* Fix a trivial error in the log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a rare segfault in `SHOW CREATE TABLE`. Fixes [#11490](https://github.com/ClickHouse/ClickHouse/issues/11490). [#11579](https://github.com/ClickHouse/ClickHouse/pull/11579) ([tavplubix](https://github.com/tavplubix)).
* All queries within one HTTP session had the same query_id. It is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)).
* Now the clickhouse-server Docker container will prefer IPv6 when checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)).
* Fix shard_num/replica_num for `<node>` (breaks use_compact_format_in_distributed_parts_names). [#11528](https://github.com/ClickHouse/ClickHouse/pull/11528) ([Azat Khuzhin](https://github.com/azat)).
* Fix a race condition which may lead to an exception during table drop. It's a bit tricky and not dangerous at all. If you want an explanation, just message me in Telegram. [#11523](https://github.com/ClickHouse/ClickHouse/pull/11523) ([alesapin](https://github.com/alesapin)).
* Fix a memory leak when an exception is thrown in the middle of aggregation with -State functions. This fixes [#8995](https://github.com/ClickHouse/ClickHouse/issues/8995). [#11496](https://github.com/ClickHouse/ClickHouse/pull/11496) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* If a data skipping index is dependent on columns that are going to be modified during a background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after the merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
* Get rid of old libunwind patches. https://github.com/ClickHouse-Extras/libunwind/commit/500aa227911bd185a94bfc071d68f4d3b03cb3b1#r39048012 This allows disabling `-fno-omit-frame-pointer` in `clang` builds, which improves performance by at least 1% on average. [#10761](https://github.com/ClickHouse/ClickHouse/pull/10761) ([Amos Bird](https://github.com/amosbird)).
* Fix usage of a primary key wrapped into a function with the 'FINAL' modifier and the 'ORDER BY' optimization. [#10715](https://github.com/ClickHouse/ClickHouse/pull/10715) ([Anton Popov](https://github.com/CurtizJ)).

#### Build/Testing/Packaging Improvement

* Fix several insignificant errors in unit tests. [#11262](https://github.com/ClickHouse/ClickHouse/pull/11262) ([alesapin](https://github.com/alesapin)).
* Fix a (false) MSan report in MergeTreeIndexFullText. The issue first appeared in [#9968](https://github.com/ClickHouse/ClickHouse/issues/9968). [#10801](https://github.com/ClickHouse/ClickHouse/pull/10801) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.4.5.36-stable 2020-06-10

#### Bug Fix

* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or a high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix returned compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has a compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix potential uninitialized memory read in MergeTree shutdown if the table was not created successfully. [#11420](https://github.com/ClickHouse/ClickHouse/pull/11420) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed geohashesInBox with arguments outside of the latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Remove redundant lock during parts send in ReplicatedMergeTree. [#11354](https://github.com/ClickHouse/ClickHouse/pull/11354) ([alesapin](https://github.com/alesapin)).
* Fix support for `\G` (vertical output) in clickhouse-client in multiline mode. This closes [#9933](https://github.com/ClickHouse/ClickHouse/issues/9933). [#11350](https://github.com/ClickHouse/ClickHouse/pull/11350) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential segfault when using the `Lazy` database. [#11348](https://github.com/ClickHouse/ClickHouse/pull/11348) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Now merges are stopped before changing metadata in `ALTER` queries. [#11335](https://github.com/ClickHouse/ClickHouse/pull/11335) ([alesapin](https://github.com/alesapin)).
* Make writing to `MATERIALIZED VIEW` with setting `parallel_view_processing = 1` parallel again. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#11330](https://github.com/ClickHouse/ClickHouse/pull/11330) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix visitParamExtractRaw when the extracted JSON has strings with unbalanced { or [. [#11318](https://github.com/ClickHouse/ClickHouse/pull/11318) ([Ewout](https://github.com/devwout)).
* Fix very rare race condition in ThreadPool. [#11314](https://github.com/ClickHouse/ClickHouse/pull/11314) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix insignificant data race in clickhouse-copier. Found by integration tests. [#11313](https://github.com/ClickHouse/ClickHouse/pull/11313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has an Array column in the primary key and if a query is filtering by this column with the `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug where query speed estimation could be incorrect and the limit of `min_execution_speed` might not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of the exception message with the progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash when SET DEFAULT ROLE is called with wrong arguments. This fixes https://github.com/ClickHouse/ClickHouse/issues/10586. [#11278](https://github.com/ClickHouse/ClickHouse/pull/11278) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash while reading malformed data in the Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when a cache dictionary could return a default value instead of the normal one (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in the inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible error `Cannot capture column` for higher-order functions with an `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed S3 globbing which could fail in case of more than 1000 keys and some backends. [#11179](https://github.com/ClickHouse/ClickHouse/pull/11179) ([Vladimir Chebotarev](https://github.com/excitoon)).
* If a data skipping index is dependent on columns that are going to be modified during a background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after the merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
* Fix Kafka performance issue related to reschedules based on limits, which were always applied. [#11149](https://github.com/ClickHouse/ClickHouse/pull/11149) ([filimonov](https://github.com/filimonov)).
* Fix the hang which was sometimes happening during DROP of a table with engine=Kafka (or during server restarts). [#11145](https://github.com/ClickHouse/ClickHouse/pull/11145) ([filimonov](https://github.com/filimonov)).
* Fix excessive reserving of threads for simple queries (an optimization for reducing the number of threads, which was partly broken after changes in the pipeline). [#11114](https://github.com/ClickHouse/ClickHouse/pull/11114) ([Azat Khuzhin](https://github.com/azat)).
* Fix predicates optimization for distributed queries (`enable_optimize_predicate_expression=1`) for queries with a `HAVING` section (i.e. when filtering on the initiator server is required), by preserving the order of expressions (and this is enough to fix it), and also force the aggregator to use column names instead of indexes. Fixes: [#10613](https://github.com/ClickHouse/ClickHouse/issues/10613), [#11413](https://github.com/ClickHouse/ClickHouse/issues/11413). [#10621](https://github.com/ClickHouse/ClickHouse/pull/10621) ([Azat Khuzhin](https://github.com/azat)).

#### Build/Testing/Packaging Improvement

* Fix several flaky integration tests. [#11355](https://github.com/ClickHouse/ClickHouse/pull/11355) ([alesapin](https://github.com/alesapin)).

### ClickHouse release v20.4.4.18-stable 2020-05-26

No changes compared to v20.4.3.16-stable.

### ClickHouse release v20.4.3.16-stable 2020-05-23

#### Bug Fix

## ClickHouse release v20.3

### ClickHouse release v20.3.12.112-lts 2020-06-25

#### Bug Fix

* Fix rare crash caused by using `Nullable` column in prewhere condition. Continuation of [#11608](https://github.com/ClickHouse/ClickHouse/issues/11608). [#11869](https://github.com/ClickHouse/ClickHouse/pull/11869) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Don't allow arrayJoin inside higher-order functions. It was leading to broken protocol synchronization. This closes [#3933](https://github.com/ClickHouse/ClickHouse/issues/3933). [#11846](https://github.com/ClickHouse/ClickHouse/pull/11846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix using too many threads for queries. [#11788](https://github.com/ClickHouse/ClickHouse/pull/11788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix unexpected behaviour of queries like `SELECT *, xyz.*`, which succeeded while an error was expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)).
* Now replicated fetches will be cancelled during metadata alter. [#11744](https://github.com/ClickHouse/ClickHouse/pull/11744) ([alesapin](https://github.com/alesapin)).
* Fixed LOGICAL_ERROR caused by wrong type deduction of complex literals in the Values input format. [#11732](https://github.com/ClickHouse/ClickHouse/pull/11732) ([tavplubix](https://github.com/tavplubix)).
* Fix `ORDER BY ... WITH FILL` over const columns. [#11697](https://github.com/ClickHouse/ClickHouse/pull/11697) ([Anton Popov](https://github.com/CurtizJ)).
* Pass proper timeouts when communicating with the XDBC bridge. Recently timeouts were not respected when checking bridge liveness and receiving meta info. [#11690](https://github.com/ClickHouse/ClickHouse/pull/11690) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix an error which leads to an incorrect state of `system.mutations`. It may show that the whole mutation is already done, but the server still has `MUTATE_PART` tasks in the replication queue and tries to execute them. This fixes [#11611](https://github.com/ClickHouse/ClickHouse/issues/11611). [#11681](https://github.com/ClickHouse/ClickHouse/pull/11681) ([alesapin](https://github.com/alesapin)).
* Add support for regular expressions with case-insensitive flags. This fixes [#11101](https://github.com/ClickHouse/ClickHouse/issues/11101) and fixes [#11506](https://github.com/ClickHouse/ClickHouse/issues/11506). [#11649](https://github.com/ClickHouse/ClickHouse/pull/11649) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Remove the trivial count query optimization if row-level security is set. In previous versions, the user got the total count of records in a table instead of the filtered count. This fixes [#11352](https://github.com/ClickHouse/ClickHouse/issues/11352). [#11644](https://github.com/ClickHouse/ClickHouse/pull/11644) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix bloom filters for String (data skipping indices). [#11638](https://github.com/ClickHouse/ClickHouse/pull/11638) ([Azat Khuzhin](https://github.com/azat)).
* Fix rare crash caused by using `Nullable` column in prewhere condition. (Probably it is connected with [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572) somehow.) [#11608](https://github.com/ClickHouse/ClickHouse/pull/11608) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the error `Block structure mismatch` for queries with sampling reading from a `Buffer` table. [#11602](https://github.com/ClickHouse/ClickHouse/pull/11602) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix wrong exit code of clickhouse-client when exception.code() % 256 = 0. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)).
* Fix a trivial error in the log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* All queries within one HTTP session had the same query_id. It is fixed. [#11578](https://github.com/ClickHouse/ClickHouse/pull/11578) ([tavplubix](https://github.com/tavplubix)).
* Now the clickhouse-server Docker container will prefer IPv6 when checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)).
* Fix shard_num/replica_num for `<node>` (breaks use_compact_format_in_distributed_parts_names). [#11528](https://github.com/ClickHouse/ClickHouse/pull/11528) ([Azat Khuzhin](https://github.com/azat)).
* Fix a memory leak when an exception is thrown in the middle of aggregation with -State functions. This fixes [#8995](https://github.com/ClickHouse/ClickHouse/issues/8995). [#11496](https://github.com/ClickHouse/ClickHouse/pull/11496) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix wrong results of distributed queries when an alias could override a qualified column name. Fixes [#9672](https://github.com/ClickHouse/ClickHouse/issues/9672) [#9714](https://github.com/ClickHouse/ClickHouse/issues/9714). [#9972](https://github.com/ClickHouse/ClickHouse/pull/9972) ([Artem Zuikov](https://github.com/4ertus2)).

### ClickHouse release v20.3.11.97-lts 2020-06-10

#### New Feature

* Now ClickHouse controls timeouts of dictionary sources on its side. Two new settings were added to the cache dictionary configuration: `strict_max_lifetime_seconds`, which is `max_lifetime` by default, and `query_wait_timeout_milliseconds`, which is one minute by default. The first setting is also useful with the `allow_read_expired_keys` setting (to forbid reading very expired keys). [#10337](https://github.com/ClickHouse/ClickHouse/pull/10337) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).

#### Bug Fix

* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or a high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix returned compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix server crash when a column has a compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
* Fix crash in JOIN over LowCardinality(T) and Nullable(T). [#11380](https://github.com/ClickHouse/ClickHouse/issues/11380). [#11414](https://github.com/ClickHouse/ClickHouse/pull/11414) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix error code for a wrong `USING` key. [#11373](https://github.com/ClickHouse/ClickHouse/issues/11373). [#11404](https://github.com/ClickHouse/ClickHouse/pull/11404) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed geohashesInBox with arguments outside of the latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
* Better errors for `joinGet()` functions. [#11389](https://github.com/ClickHouse/ClickHouse/pull/11389) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Remove redundant lock during parts send in ReplicatedMergeTree. [#11354](https://github.com/ClickHouse/ClickHouse/pull/11354) ([alesapin](https://github.com/alesapin)).
* Fix support for `\G` (vertical output) in clickhouse-client in multiline mode. This closes [#9933](https://github.com/ClickHouse/ClickHouse/issues/9933). [#11350](https://github.com/ClickHouse/ClickHouse/pull/11350) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in direct selects from StorageJoin (without JOIN) and wrong nullability. [#11340](https://github.com/ClickHouse/ClickHouse/pull/11340) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Now merges are stopped before changing metadata in `ALTER` queries. [#11335](https://github.com/ClickHouse/ClickHouse/pull/11335) ([alesapin](https://github.com/alesapin)).
* Make writing to `MATERIALIZED VIEW` with setting `parallel_view_processing = 1` parallel again. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#11330](https://github.com/ClickHouse/ClickHouse/pull/11330) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix visitParamExtractRaw when the extracted JSON has strings with unbalanced { or [. [#11318](https://github.com/ClickHouse/ClickHouse/pull/11318) ([Ewout](https://github.com/devwout)).
* Fix very rare race condition in ThreadPool. [#11314](https://github.com/ClickHouse/ClickHouse/pull/11314) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the issue when index analysis cannot work if a table has an Array column in the primary key and if a query is filtering by this column with the `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug where query speed estimation could be incorrect and the limit of `min_execution_speed` might not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of the exception message with the progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash while reading malformed data in the Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed a bug when a cache dictionary could return a default value instead of the normal one (when there are only expired keys). This affects only string fields. [#11233](https://github.com/ClickHouse/ClickHouse/pull/11233) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the error `Block structure mismatch in QueryPipeline` while reading from `VIEW` with constants in the inner query. Fixes [#11181](https://github.com/ClickHouse/ClickHouse/issues/11181). [#11205](https://github.com/ClickHouse/ClickHouse/pull/11205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible exception `Invalid status for associated output`. [#11200](https://github.com/ClickHouse/ClickHouse/pull/11200) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible error `Cannot capture column` for higher-order functions with an `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed S3 globbing which could fail in case of more than 1000 keys and some backends. [#11179](https://github.com/ClickHouse/ClickHouse/pull/11179) ([Vladimir Chebotarev](https://github.com/excitoon)).
* If a data skipping index is dependent on columns that are going to be modified during a background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after the merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
* Fix excessive reserving of threads for simple queries (an optimization for reducing the number of threads, which was partly broken after changes in the pipeline). [#11114](https://github.com/ClickHouse/ClickHouse/pull/11114) ([Azat Khuzhin](https://github.com/azat)).
* Fix predicates optimization for distributed queries (`enable_optimize_predicate_expression=1`) for queries with a `HAVING` section (i.e. when filtering on the initiator server is required), by preserving the order of expressions (and this is enough to fix it), and also force the aggregator to use column names instead of indexes. Fixes: [#10613](https://github.com/ClickHouse/ClickHouse/issues/10613), [#11413](https://github.com/ClickHouse/ClickHouse/issues/11413). [#10621](https://github.com/ClickHouse/ClickHouse/pull/10621) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Introduce commit retry logic to decrease the possibility of getting duplicates from Kafka in rare cases when offset commit was failed. [#9884](https://github.com/ClickHouse/ClickHouse/pull/9884) ([filimonov](https://github.com/filimonov)).
|
||||
|
||||
#### Performance Improvement
|
||||
|
||||
* Get dictionary and check access rights only once per each call of any function reading external dictionaries. [#10928](https://github.com/ClickHouse/ClickHouse/pull/10928) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||
|
||||
#### Build/Testing/Packaging Improvement
|
||||
|
||||
* Fix several flaky integration tests. [#11355](https://github.com/ClickHouse/ClickHouse/pull/11355) ([alesapin](https://github.com/alesapin)).
|
||||
|
||||
### ClickHouse release v20.3.10.75-lts 2020-05-23
|
||||
|
||||
#### Bug Fix
|
||||
@ -724,6 +1238,82 @@
|
||||
|
||||
## ClickHouse release v20.1
|
||||
|
||||
### ClickHouse release v20.1.16.120-stable 2020-60-26
|
||||
|
||||
#### Bug Fix
|
||||
|
||||
* Fix rare crash caused by using `Nullable` column in prewhere condition. Continuation of [#11608](https://github.com/ClickHouse/ClickHouse/issues/11608). [#11869](https://github.com/ClickHouse/ClickHouse/pull/11869) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Don't allow arrayJoin inside higher order functions. It was leading to broken protocol synchronization. This closes [#3933](https://github.com/ClickHouse/ClickHouse/issues/3933). [#11846](https://github.com/ClickHouse/ClickHouse/pull/11846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix unexpected behaviour of queries like `SELECT *, xyz.*` which were success while an error expected. [#11753](https://github.com/ClickHouse/ClickHouse/pull/11753) ([hexiaoting](https://github.com/hexiaoting)).
|
||||
* Fixed LOGICAL_ERROR caused by wrong type deduction of complex literals in Values input format. [#11732](https://github.com/ClickHouse/ClickHouse/pull/11732) ([tavplubix](https://github.com/tavplubix)).
|
||||
* Fix `ORDER BY ... WITH FILL` over const columns. [#11697](https://github.com/ClickHouse/ClickHouse/pull/11697) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* Pass proper timeouts when communicating with XDBC bridge. Recently timeouts were not respected when checking bridge liveness and receiving meta info. [#11690](https://github.com/ClickHouse/ClickHouse/pull/11690) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Add support for regular expressions with case-insensitive flags. This fixes [#11101](https://github.com/ClickHouse/ClickHouse/issues/11101) and fixes [#11506](https://github.com/ClickHouse/ClickHouse/issues/11506). [#11649](https://github.com/ClickHouse/ClickHouse/pull/11649) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix bloom filters for String (data skipping indices). [#11638](https://github.com/ClickHouse/ClickHouse/pull/11638) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix rare crash caused by using `Nullable` column in prewhere condition. (Probably it is connected with [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572) somehow). [#11608](https://github.com/ClickHouse/ClickHouse/pull/11608) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix wrong exit code of the clickhouse-client, when exception.code() % 256 = 0. [#11601](https://github.com/ClickHouse/ClickHouse/pull/11601) ([filimonov](https://github.com/filimonov)).
|
||||
* Fix trivial error in log message about "Mark cache size was lowered" at server startup. This closes [#11399](https://github.com/ClickHouse/ClickHouse/issues/11399). [#11589](https://github.com/ClickHouse/ClickHouse/pull/11589) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Now clickhouse-server docker container will prefer IPv6 checking server aliveness. [#11550](https://github.com/ClickHouse/ClickHouse/pull/11550) ([Ivan Starkov](https://github.com/istarkov)).
|
||||
* Fix memory leak when exception is thrown in the middle of aggregation with -State functions. This fixes [#8995](https://github.com/ClickHouse/ClickHouse/issues/8995). [#11496](https://github.com/ClickHouse/ClickHouse/pull/11496) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix usage of primary key wrapped into a function with 'FINAL' modifier and 'ORDER BY' optimization. [#10715](https://github.com/ClickHouse/ClickHouse/pull/10715) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
|
||||
|
||||
### ClickHouse release v20.1.15.109-stable 2020-06-19
|
||||
|
||||
#### Bug Fix
|
||||
|
||||
* Fix excess lock for structure during alter. [#11790](https://github.com/ClickHouse/ClickHouse/pull/11790) ([alesapin](https://github.com/alesapin)).
|
||||
|
||||
|
||||
### ClickHouse release v20.1.14.107-stable 2020-06-11
|
||||
|
||||
#### Bug Fix
|
||||
|
||||
* Fix error `Size of offsets doesn't match size of column` for queries with `PREWHERE column in (subquery)` and `ARRAY JOIN`. [#11580](https://github.com/ClickHouse/ClickHouse/pull/11580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
|
||||
|
||||
### ClickHouse release v20.1.13.105-stable 2020-06-10
|
||||
|
||||
#### Bug Fix
|
||||
|
||||
* Fix the error `Data compressed with different methods` that can happen if `min_bytes_to_use_direct_io` is enabled and PREWHERE is active and using SAMPLE or high number of threads. This fixes [#11539](https://github.com/ClickHouse/ClickHouse/issues/11539). [#11540](https://github.com/ClickHouse/ClickHouse/pull/11540) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix return compressed size for codecs. [#11448](https://github.com/ClickHouse/ClickHouse/pull/11448) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix server crash when a column has compression codec with non-literal arguments. Fixes [#11365](https://github.com/ClickHouse/ClickHouse/issues/11365). [#11431](https://github.com/ClickHouse/ClickHouse/pull/11431) ([alesapin](https://github.com/alesapin)).
|
||||
* Fix pointInPolygon with nan as point. Fixes https://github.com/ClickHouse/ClickHouse/issues/11375. [#11421](https://github.com/ClickHouse/ClickHouse/pull/11421) ([Alexey Ilyukhov](https://github.com/livace)).
|
||||
* Fixed geohashesInBox with arguments outside of latitude/longitude range. [#11403](https://github.com/ClickHouse/ClickHouse/pull/11403) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||
* Fix possible `Pipeline stuck` error for queries with external sort and limit. Fixes [#11359](https://github.com/ClickHouse/ClickHouse/issues/11359). [#11366](https://github.com/ClickHouse/ClickHouse/pull/11366) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix crash in `quantilesExactWeightedArray`. [#11337](https://github.com/ClickHouse/ClickHouse/pull/11337) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Make writing to `MATERIALIZED VIEW` with setting `parallel_view_processing = 1` parallel again. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#11330](https://github.com/ClickHouse/ClickHouse/pull/11330) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix visitParamExtractRaw when extracted JSON has strings with unbalanced { or [. [#11318](https://github.com/ClickHouse/ClickHouse/pull/11318) ([Ewout](https://github.com/devwout)).
|
||||
* Fix very rare race condition in ThreadPool. [#11314](https://github.com/ClickHouse/ClickHouse/pull/11314) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix potential uninitialized memory in conversion. Example: `SELECT toIntervalSecond(now64())`. [#11311](https://github.com/ClickHouse/ClickHouse/pull/11311) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix the issue when index analysis cannot work if a table has Array column in primary key and if a query is filtering by this column with `empty` or `notEmpty` functions. This fixes [#11286](https://github.com/ClickHouse/ClickHouse/issues/11286). [#11303](https://github.com/ClickHouse/ClickHouse/pull/11303) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix bug when query speed estimation can be incorrect and the limit of `min_execution_speed` may not work or work incorrectly if the query is throttled by `max_network_bandwidth`, `max_execution_speed` or `priority` settings. Change the default value of `timeout_before_checking_execution_speed` to non-zero, because otherwise the settings `min_execution_speed` and `max_execution_speed` have no effect. This fixes [#11297](https://github.com/ClickHouse/ClickHouse/issues/11297). This fixes [#5732](https://github.com/ClickHouse/ClickHouse/issues/5732). This fixes [#6228](https://github.com/ClickHouse/ClickHouse/issues/6228). Usability improvement: avoid concatenation of exception message with progress bar in `clickhouse-client`. [#11296](https://github.com/ClickHouse/ClickHouse/pull/11296) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix crash while reading malformed data in Protobuf format. This fixes https://github.com/ClickHouse/ClickHouse/issues/5957, fixes https://github.com/ClickHouse/ClickHouse/issues/11203. [#11258](https://github.com/ClickHouse/ClickHouse/pull/11258) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||
* Fix possible error `Cannot capture column` for higher-order functions with `Array(Array(LowCardinality))` captured argument. [#11185](https://github.com/ClickHouse/ClickHouse/pull/11185) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* If data skipping index is dependent on columns that are going to be modified during background merge (for SummingMergeTree, AggregatingMergeTree as well as for TTL GROUP BY), it was calculated incorrectly. This issue is fixed by moving index calculation after merge so the index is calculated on merged data. [#11162](https://github.com/ClickHouse/ClickHouse/pull/11162) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Remove logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
|
||||
* Fixed parseDateTime64BestEffort argument resolution bugs. [#10925](https://github.com/ClickHouse/ClickHouse/issues/10925). [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||
* Fix incorrect raw data size in method getRawData(). [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
|
||||
* Fix backward compatibility with tuples in Distributed tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* Fix SIGSEGV in StringHashTable (if such key does not exist). [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fixed bug in `ReplicatedMergeTree` which might cause some `ALTER` on `OPTIMIZE` query to hang waiting for some replica after it become inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
|
||||
* Fix columns order after Block::sortColumns() (also add a test that shows that it affects some real use case - Buffer engine). [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix the issue with ODBC bridge when no quoting of identifiers is requested. This fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix UBSan and MSan report in DateLUT. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||
* - Make use of `src_type` for correct type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
|
||||
* Fix `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` without exception should be finished if exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix combinator -OrNull and -OrDefault when combined with -State. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
|
||||
* Fix disappearing totals. Totals could have being filtered if query had had join or subquery with external where condition. Fixes [#10674](https://github.com/ClickHouse/ClickHouse/issues/10674). [#10698](https://github.com/ClickHouse/ClickHouse/pull/10698) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix multiple usages of `IN` operator with the identical set in one query. [#10686](https://github.com/ClickHouse/ClickHouse/pull/10686) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* Fix order of parameters in AggregateTransform constructor. [#10667](https://github.com/ClickHouse/ClickHouse/pull/10667) ([palasonic1](https://github.com/palasonic1)).
|
||||
* Fix the lack of parallel execution of remote queries with `distributed_aggregation_memory_efficient` enabled. Fixes [#10655](https://github.com/ClickHouse/ClickHouse/issues/10655). [#10664](https://github.com/ClickHouse/ClickHouse/pull/10664) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix predicates optimization for distributed queries (`enable_optimize_predicate_expression=1`) for queries with `HAVING` section (i.e. when filtering on the server initiator is required), by preserving the order of expressions (and this is enough to fix), and also force aggregator use column names over indexes. Fixes: [#10613](https://github.com/ClickHouse/ClickHouse/issues/10613), [#11413](https://github.com/ClickHouse/ClickHouse/issues/11413). [#10621](https://github.com/ClickHouse/ClickHouse/pull/10621) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix error `the BloomFilter false positive must be a double number between 0 and 1` [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
|
||||
* Fix SELECT of column ALIAS which default expression type different from column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* * Implemented comparison between DateTime64 and String values (just like for DateTime). [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||
|
||||
|
||||
### ClickHouse release v20.1.12.86, 2020-05-26
|
||||
|
||||
#### Bug Fix
|
||||
|
@ -342,6 +342,7 @@ include (cmake/find/sparsehash.cmake)
|
||||
include (cmake/find/re2.cmake)
|
||||
include (cmake/find/libgsasl.cmake)
|
||||
include (cmake/find/rdkafka.cmake)
|
||||
include (cmake/find/amqpcpp.cmake)
|
||||
include (cmake/find/capnp.cmake)
|
||||
include (cmake/find/llvm.cmake)
|
||||
include (cmake/find/opencl.cmake)
|
||||
@ -420,3 +421,5 @@ add_subdirectory (tests)
|
||||
add_subdirectory (utils)
|
||||
|
||||
include (cmake/print_include_directories.cmake)
|
||||
|
||||
include (cmake/sanitize_target_link_libraries.cmake)
|
||||
|
@ -13,3 +13,8 @@ ClickHouse is an open-source column-oriented database management system that all
|
||||
* [Yandex.Messenger channel](https://yandex.ru/chat/#/join/20e380d9-c7be-4123-ab06-e95fb946975e) shares announcements and useful links in Russian.
|
||||
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
|
||||
* You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.
|
||||
|
||||
## Upcoming Events
|
||||
|
||||
* [ClickHouse for genetic data (in Russian)](https://cloud.yandex.ru/events/152) on July 14, 2020.
|
||||
* [ClickHouse virtual office hours](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/271522978/) on July 15, 2020.
|
||||
|
@ -11,7 +11,10 @@ currently being supported with security updates:
|
||||
| 18.x | :x: |
|
||||
| 19.x | :x: |
|
||||
| 19.14 | :white_check_mark: |
|
||||
| 20.x | :white_check_mark: |
|
||||
| 20.1 | :x: |
|
||||
| 20.3 | :white_check_mark: |
|
||||
| 20.4 | :white_check_mark: |
|
||||
| 20.5 | :white_check_mark: |
|
||||
|
||||
## Reporting a Vulnerability
|
||||
|
||||
|
@ -77,10 +77,8 @@ target_link_libraries (common
|
||||
Poco::Util
|
||||
Poco::Foundation
|
||||
replxx
|
||||
fmt
|
||||
|
||||
PRIVATE
|
||||
cctz
|
||||
fmt
|
||||
)
|
||||
|
||||
if (ENABLE_TESTS)
|
||||
|
@ -48,7 +48,7 @@ protected:
|
||||
};
|
||||
|
||||
const String history_file_path;
|
||||
static constexpr char word_break_characters[] = " \t\v\f\a\b\r\n`~!@#$%^&*()-=+[{]}\\|;:'\",<.>/?_";
|
||||
static constexpr char word_break_characters[] = " \t\v\f\a\b\r\n`~!@#$%^&*()-=+[{]}\\|;:'\",<.>/?";
|
||||
|
||||
String input;
|
||||
|
||||
|
@ -2,7 +2,7 @@ LIBRARY()
|
||||
|
||||
ADDINCL(
|
||||
GLOBAL clickhouse/base
|
||||
contrib/libs/cctz/include
|
||||
GLOBAL contrib/libs/cctz/include
|
||||
)
|
||||
|
||||
CFLAGS (GLOBAL -DARCADIA_BUILD)
|
||||
|
@ -628,7 +628,7 @@ void BaseDaemon::initialize(Application & self)
|
||||
|
||||
/// Create pid file.
|
||||
if (config().has("pid"))
|
||||
pid.emplace(config().getString("pid"));
|
||||
pid.emplace(config().getString("pid"), DB::StatusFile::write_pid);
|
||||
|
||||
/// Change path for logging.
|
||||
if (!log_path.empty())
|
||||
@ -812,63 +812,6 @@ void BaseDaemon::defineOptions(Poco::Util::OptionSet & new_options)
|
||||
Poco::Util::ServerApplication::defineOptions(new_options);
|
||||
}
|
||||
|
||||
bool isPidRunning(pid_t pid)
|
||||
{
|
||||
return getpgid(pid) >= 0;
|
||||
}
|
||||
|
||||
BaseDaemon::PID::PID(const std::string & file_)
|
||||
{
|
||||
file = Poco::Path(file_).absolute().toString();
|
||||
Poco::File poco_file(file);
|
||||
|
||||
if (poco_file.exists())
|
||||
{
|
||||
pid_t pid_read = 0;
|
||||
{
|
||||
std::ifstream in(file);
|
||||
if (in.good())
|
||||
{
|
||||
in >> pid_read;
|
||||
if (pid_read && isPidRunning(pid_read))
|
||||
throw Poco::Exception("Pid file exists and program running with pid = " + std::to_string(pid_read) + ", should not start daemon.");
|
||||
}
|
||||
}
|
||||
std::cerr << "Old pid file exists (with pid = " << pid_read << "), removing." << std::endl;
|
||||
poco_file.remove();
|
||||
}
|
||||
|
||||
int fd = open(file.c_str(),
|
||||
O_CREAT | O_EXCL | O_WRONLY,
|
||||
S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH);
|
||||
|
||||
if (-1 == fd)
|
||||
{
|
||||
if (EEXIST == errno)
|
||||
throw Poco::Exception("Pid file exists, should not start daemon.");
|
||||
throw Poco::CreateFileException("Cannot create pid file.");
|
||||
}
|
||||
|
||||
SCOPE_EXIT({ close(fd); });
|
||||
|
||||
std::stringstream s;
|
||||
s << getpid();
|
||||
if (static_cast<ssize_t>(s.str().size()) != write(fd, s.str().c_str(), s.str().size()))
|
||||
throw Poco::Exception("Cannot write to pid file.");
|
||||
}
|
||||
|
||||
BaseDaemon::PID::~PID()
|
||||
{
|
||||
try
|
||||
{
|
||||
Poco::File(file).remove();
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
DB::tryLogCurrentException(__PRETTY_FUNCTION__);
|
||||
}
|
||||
}
|
||||
|
||||
void BaseDaemon::handleSignal(int signal_id)
|
||||
{
|
||||
if (signal_id == SIGINT ||
|
||||
|
@ -22,6 +22,7 @@
|
||||
#include <common/getThreadId.h>
|
||||
#include <daemon/GraphiteWriter.h>
|
||||
#include <Common/Config/ConfigProcessor.h>
|
||||
#include <Common/StatusFile.h>
|
||||
#include <loggers/Loggers.h>
|
||||
|
||||
|
||||
@ -163,16 +164,7 @@ protected:
|
||||
|
||||
std::unique_ptr<Poco::TaskManager> task_manager;
|
||||
|
||||
/// RAII wrapper for pid file.
|
||||
struct PID
|
||||
{
|
||||
std::string file;
|
||||
|
||||
PID(const std::string & file_);
|
||||
~PID();
|
||||
};
|
||||
|
||||
std::optional<PID> pid;
|
||||
std::optional<DB::StatusFile> pid;
|
||||
|
||||
std::atomic_bool is_cancelled{false};
|
||||
|
||||
|
@ -8,6 +8,5 @@ target_include_directories (daemon PUBLIC ..)
|
||||
target_link_libraries (daemon PUBLIC loggers PRIVATE clickhouse_common_io clickhouse_common_config common ${EXECINFO_LIBRARIES})
|
||||
|
||||
if (USE_SENTRY)
|
||||
target_link_libraries (daemon PRIVATE curl)
|
||||
target_link_libraries (daemon PRIVATE ${SENTRY_LIBRARY})
|
||||
endif ()
|
||||
|
@ -1,19 +1,16 @@
|
||||
#pragma once
|
||||
|
||||
#include <chrono>
|
||||
#include <ctime>
|
||||
#include <string>
|
||||
#include <iomanip>
|
||||
#include <sstream>
|
||||
#include <cctz/time_zone.h>
|
||||
|
||||
|
||||
namespace ext
|
||||
{
|
||||
inline std::string to_string(const std::time_t & time)
|
||||
{
|
||||
std::stringstream ss;
|
||||
ss << std::put_time(std::localtime(&time), "%Y-%m-%d %X");
|
||||
return ss.str();
|
||||
return cctz::format("%Y-%m-%d %H:%M:%S", std::chrono::system_clock::from_time_t(time), cctz::local_time_zone());
|
||||
}
|
||||
|
||||
template <typename Clock, typename Duration = typename Clock::duration>
|
||||
|
20
cmake/find/amqpcpp.cmake
Normal file
20
cmake/find/amqpcpp.cmake
Normal file
@ -0,0 +1,20 @@
|
||||
SET(ENABLE_AMQPCPP 1)
|
||||
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/CMakeLists.txt")
|
||||
message (WARNING "submodule contrib/AMQP-CPP is missing. to fix try run: \n git submodule update --init --recursive")
|
||||
set (ENABLE_AMQPCPP 0)
|
||||
endif ()
|
||||
|
||||
if (ENABLE_AMQPCPP)
|
||||
|
||||
set (USE_AMQPCPP 1)
|
||||
set (AMQPCPP_LIBRARY AMQP-CPP)
|
||||
|
||||
set (AMQPCPP_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/include")
|
||||
|
||||
list (APPEND AMQPCPP_INCLUDE_DIR
|
||||
"${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP/include"
|
||||
"${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP")
|
||||
|
||||
endif()
|
||||
|
||||
message (STATUS "Using AMQP-CPP=${USE_AMQPCPP}: ${AMQPCPP_INCLUDE_DIR} : ${AMQPCPP_LIBRARY}")
|
@ -6,9 +6,7 @@ if (NOT EXISTS "${SENTRY_INCLUDE_DIR}/sentry.h")
|
||||
endif ()
|
||||
|
||||
if (NOT OS_FREEBSD AND NOT SPLIT_SHARED_LIBRARIES AND NOT_UNBUNDLED AND NOT (OS_DARWIN AND COMPILER_CLANG))
|
||||
option (USE_SENTRY "Use Sentry" ON)
|
||||
set (CURL_LIBRARY ${ClickHouse_SOURCE_DIR}/contrib/curl/lib)
|
||||
set (CURL_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/curl/include)
|
||||
option (USE_SENTRY "Use Sentry" ${ENABLE_LIBRARIES})
|
||||
set (SENTRY_TRANSPORT "curl" CACHE STRING "")
|
||||
set (SENTRY_BACKEND "none" CACHE STRING "")
|
||||
set (SENTRY_EXPORT_SYMBOLS OFF CACHE BOOL "")
|
||||
|
56
cmake/sanitize_target_link_libraries.cmake
Normal file
56
cmake/sanitize_target_link_libraries.cmake
Normal file
@ -0,0 +1,56 @@
|
||||
# When you will try to link target with the directory (that exists), cmake will
|
||||
# skip this without an error, only the following warning will be reported:
|
||||
#
|
||||
# target_link_libraries(main /tmp)
|
||||
#
|
||||
# WARNING: Target "main" requests linking to directory "/tmp". Targets may link only to libraries. CMake is dropping the item.
|
||||
#
|
||||
# And there is no cmake policy that controls this.
|
||||
# (I guess the reason that it is allowed is because of FRAMEWORK for OSX).
|
||||
#
|
||||
# So to avoid error-prone cmake rules, this can be sanitized.
|
||||
# There are the following ways:
|
||||
# - overwrite target_link_libraries()/link_libraries() and check *before*
|
||||
# calling real macro, but this requires duplicate all supported syntax
|
||||
# -- too complex
|
||||
# - overwrite target_link_libraries() and check LINK_LIBRARIES property, this
|
||||
# works great
|
||||
# -- but cannot be used with link_libraries()
|
||||
# - use BUILDSYSTEM_TARGETS property to get list of all targets and sanitize
|
||||
# -- this will work.
|
||||
|
||||
# https://stackoverflow.com/a/62311397/328260
|
||||
function (get_all_targets var)
|
||||
set (targets)
|
||||
get_all_targets_recursive (targets ${CMAKE_CURRENT_SOURCE_DIR})
|
||||
set (${var} ${targets} PARENT_SCOPE)
|
||||
endfunction()
|
||||
macro (get_all_targets_recursive targets dir)
|
||||
get_property (subdirectories DIRECTORY ${dir} PROPERTY SUBDIRECTORIES)
|
||||
foreach (subdir ${subdirectories})
|
||||
get_all_targets_recursive (${targets} ${subdir})
|
||||
endforeach ()
|
||||
get_property (current_targets DIRECTORY ${dir} PROPERTY BUILDSYSTEM_TARGETS)
|
||||
list (APPEND ${targets} ${current_targets})
|
||||
endmacro ()
|
||||
|
||||
macro (sanitize_link_libraries target)
|
||||
get_target_property(target_type ${target} TYPE)
|
||||
if (${target_type} STREQUAL "INTERFACE_LIBRARY")
|
||||
get_property(linked_libraries TARGET ${target} PROPERTY INTERFACE_LINK_LIBRARIES)
|
||||
else()
|
||||
get_property(linked_libraries TARGET ${target} PROPERTY LINK_LIBRARIES)
|
||||
endif()
|
||||
foreach (linked_library ${linked_libraries})
|
||||
if (TARGET ${linked_library})
|
||||
# just in case, skip if TARGET
|
||||
elseif (IS_DIRECTORY ${linked_library})
|
||||
message(FATAL_ERROR "${target} requested to link with directory: ${linked_library}")
|
||||
endif()
|
||||
endforeach()
|
||||
endmacro()
|
||||
|
||||
get_all_targets (all_targets)
|
||||
foreach (target ${all_targets})
|
||||
sanitize_link_libraries(${target})
|
||||
endforeach()
|
1
contrib/AMQP-CPP
vendored
Submodule
1
contrib/AMQP-CPP
vendored
Submodule
@ -0,0 +1 @@
|
||||
Subproject commit 1c08399ab0ab9e4042ef8e2bbe9e208e5dcbc13b
|
27
contrib/CMakeLists.txt
vendored
27
contrib/CMakeLists.txt
vendored
@ -106,6 +106,12 @@ if (ENABLE_LDAP AND USE_INTERNAL_LDAP_LIBRARY)
|
||||
add_subdirectory (openldap-cmake)
|
||||
endif ()
|
||||
|
||||
# Should go before:
|
||||
# - mariadb-connector-c
|
||||
# - aws-s3-cmake
|
||||
# - sentry-native
|
||||
add_subdirectory (curl-cmake)
|
||||
|
||||
function(mysql_support)
|
||||
set(CLIENT_PLUGIN_CACHING_SHA2_PASSWORD STATIC)
|
||||
set(CLIENT_PLUGIN_SHA256_PASSWORD STATIC)
|
||||
@ -263,23 +269,6 @@ if (USE_INTERNAL_GRPC_LIBRARY)
|
||||
add_subdirectory(grpc-cmake)
|
||||
endif ()
|
||||
|
||||
if (USE_INTERNAL_AWS_S3_LIBRARY OR USE_SENTRY)
|
||||
set (save_CMAKE_C_FLAGS ${CMAKE_C_FLAGS})
|
||||
set (save_CMAKE_REQUIRED_LIBRARIES ${CMAKE_REQUIRED_LIBRARIES})
|
||||
set (save_CMAKE_REQUIRED_INCLUDES ${CMAKE_REQUIRED_INCLUDES})
|
||||
set (save_CMAKE_REQUIRED_FLAGS ${CMAKE_REQUIRED_FLAGS})
|
||||
set (save_CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH})
|
||||
add_subdirectory(curl-cmake)
|
||||
set (CMAKE_C_FLAGS ${save_CMAKE_C_FLAGS})
|
||||
set (CMAKE_REQUIRED_LIBRARIES ${save_CMAKE_REQUIRED_LIBRARIES})
|
||||
set (CMAKE_CMAKE_REQUIRED_INCLUDES ${save_CMAKE_REQUIRED_INCLUDES})
|
||||
set (CMAKE_REQUIRED_FLAGS ${save_CMAKE_REQUIRED_FLAGS})
|
||||
set (CMAKE_CMAKE_MODULE_PATH ${save_CMAKE_MODULE_PATH})
|
||||
|
||||
# The library is large - avoid bloat.
|
||||
target_compile_options (curl PRIVATE -g0)
|
||||
endif ()
|
||||
|
||||
if (USE_INTERNAL_AWS_S3_LIBRARY)
|
||||
add_subdirectory(aws-s3-cmake)
|
||||
|
||||
@ -301,6 +290,10 @@ if (USE_FASTOPS)
|
||||
add_subdirectory (fastops-cmake)
|
||||
endif()
|
||||
|
||||
if (USE_AMQPCPP)
|
||||
add_subdirectory (amqpcpp-cmake)
|
||||
endif()
|
||||
|
||||
if (USE_CASSANDRA)
|
||||
add_subdirectory (libuv)
|
||||
add_subdirectory (cassandra)
|
||||
|
@ -86,7 +86,10 @@ static INLINE void memcpy_sse2_128(void *dst, const void *src) {
|
||||
//---------------------------------------------------------------------
|
||||
// tiny memory copy with jump table optimized
|
||||
//---------------------------------------------------------------------
|
||||
static INLINE void *memcpy_tiny(void *dst, const void *src, size_t size) {
|
||||
/// Attribute is used to avoid an error with undefined behaviour sanitizer
|
||||
/// ../contrib/FastMemcpy/FastMemcpy.h:91:56: runtime error: applying zero offset to null pointer
|
||||
/// Found by 01307_orc_output_format.sh, cause - ORCBlockInputFormat and external ORC library.
|
||||
__attribute__((__no_sanitize__("undefined"))) static INLINE void *memcpy_tiny(void *dst, const void *src, size_t size) {
|
||||
unsigned char *dd = ((unsigned char*)dst) + size;
|
||||
const unsigned char *ss = ((const unsigned char*)src) + size;
|
||||
|
||||
|
44
contrib/amqpcpp-cmake/CMakeLists.txt
Normal file
44
contrib/amqpcpp-cmake/CMakeLists.txt
Normal file
@ -0,0 +1,44 @@
|
||||
set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/AMQP-CPP)
|
||||
|
||||
set (SRCS
|
||||
${LIBRARY_DIR}/src/array.cpp
|
||||
${LIBRARY_DIR}/src/channel.cpp
|
||||
${LIBRARY_DIR}/src/channelimpl.cpp
|
||||
${LIBRARY_DIR}/src/connectionimpl.cpp
|
||||
${LIBRARY_DIR}/src/deferredcancel.cpp
|
||||
${LIBRARY_DIR}/src/deferredconfirm.cpp
|
||||
${LIBRARY_DIR}/src/deferredconsumer.cpp
|
||||
${LIBRARY_DIR}/src/deferredextreceiver.cpp
|
||||
${LIBRARY_DIR}/src/deferredget.cpp
|
||||
${LIBRARY_DIR}/src/deferredpublisher.cpp
|
||||
${LIBRARY_DIR}/src/deferredreceiver.cpp
|
||||
${LIBRARY_DIR}/src/field.cpp
|
||||
${LIBRARY_DIR}/src/flags.cpp
|
||||
${LIBRARY_DIR}/src/linux_tcp/openssl.cpp
|
||||
${LIBRARY_DIR}/src/linux_tcp/tcpconnection.cpp
|
||||
${LIBRARY_DIR}/src/receivedframe.cpp
|
||||
${LIBRARY_DIR}/src/table.cpp
|
||||
${LIBRARY_DIR}/src/watchable.cpp
|
||||
)
|
||||
|
||||
add_library(amqp-cpp ${SRCS})
|
||||
|
||||
target_compile_options (amqp-cpp
|
||||
PUBLIC
|
||||
-Wno-old-style-cast
|
||||
-Wno-inconsistent-missing-destructor-override
|
||||
-Wno-deprecated
|
||||
-Wno-unused-parameter
|
||||
-Wno-shadow
|
||||
-Wno-tautological-type-limit-compare
|
||||
-Wno-extra-semi
|
||||
# NOTE: disable all warnings at last because the warning:
|
||||
# "conversion function converting 'XXX' to itself will never be used"
|
||||
# doesn't have it's own diagnostic flag yet.
|
||||
-w
|
||||
)
|
||||
|
||||
target_include_directories (amqp-cpp PUBLIC ${LIBRARY_DIR}/include)
|
||||
|
||||
target_link_libraries (amqp-cpp PUBLIC ssl)
|
||||
|
@ -1,152 +1,187 @@
|
||||
set (CURL_DIR ${ClickHouse_SOURCE_DIR}/contrib/curl)
|
||||
set (CURL_LIBRARY ${ClickHouse_SOURCE_DIR}/contrib/curl/lib)
|
||||
set (CURL_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/curl/include)
|
||||
option (ENABLE_CURL "Enable curl" ${ENABLE_LIBRARIES})
|
||||
|
||||
set (SRCS
|
||||
${CURL_DIR}/lib/file.c
|
||||
${CURL_DIR}/lib/timeval.c
|
||||
${CURL_DIR}/lib/base64.c
|
||||
${CURL_DIR}/lib/hostip.c
|
||||
${CURL_DIR}/lib/progress.c
|
||||
${CURL_DIR}/lib/formdata.c
|
||||
${CURL_DIR}/lib/cookie.c
|
||||
${CURL_DIR}/lib/http.c
|
||||
${CURL_DIR}/lib/sendf.c
|
||||
${CURL_DIR}/lib/url.c
|
||||
${CURL_DIR}/lib/dict.c
|
||||
${CURL_DIR}/lib/if2ip.c
|
||||
${CURL_DIR}/lib/speedcheck.c
|
||||
${CURL_DIR}/lib/ldap.c
|
||||
${CURL_DIR}/lib/version.c
|
||||
${CURL_DIR}/lib/getenv.c
|
||||
${CURL_DIR}/lib/escape.c
|
||||
${CURL_DIR}/lib/mprintf.c
|
||||
${CURL_DIR}/lib/telnet.c
|
||||
${CURL_DIR}/lib/netrc.c
|
||||
${CURL_DIR}/lib/getinfo.c
|
||||
${CURL_DIR}/lib/transfer.c
|
||||
${CURL_DIR}/lib/strcase.c
|
||||
${CURL_DIR}/lib/easy.c
|
||||
${CURL_DIR}/lib/security.c
|
||||
${CURL_DIR}/lib/curl_fnmatch.c
|
||||
${CURL_DIR}/lib/fileinfo.c
|
||||
${CURL_DIR}/lib/wildcard.c
|
||||
${CURL_DIR}/lib/krb5.c
|
||||
${CURL_DIR}/lib/memdebug.c
|
||||
${CURL_DIR}/lib/http_chunks.c
|
||||
${CURL_DIR}/lib/strtok.c
|
||||
${CURL_DIR}/lib/connect.c
|
||||
${CURL_DIR}/lib/llist.c
|
||||
${CURL_DIR}/lib/hash.c
|
||||
${CURL_DIR}/lib/multi.c
|
||||
${CURL_DIR}/lib/content_encoding.c
|
||||
${CURL_DIR}/lib/share.c
|
||||
${CURL_DIR}/lib/http_digest.c
|
||||
${CURL_DIR}/lib/md4.c
|
||||
${CURL_DIR}/lib/md5.c
|
||||
${CURL_DIR}/lib/http_negotiate.c
|
||||
${CURL_DIR}/lib/inet_pton.c
|
||||
${CURL_DIR}/lib/strtoofft.c
|
||||
${CURL_DIR}/lib/strerror.c
|
||||
${CURL_DIR}/lib/amigaos.c
|
||||
${CURL_DIR}/lib/hostasyn.c
|
||||
${CURL_DIR}/lib/hostip4.c
|
||||
${CURL_DIR}/lib/hostip6.c
|
||||
${CURL_DIR}/lib/hostsyn.c
|
||||
${CURL_DIR}/lib/inet_ntop.c
|
||||
${CURL_DIR}/lib/parsedate.c
|
||||
${CURL_DIR}/lib/select.c
|
||||
${CURL_DIR}/lib/splay.c
|
||||
${CURL_DIR}/lib/strdup.c
|
||||
${CURL_DIR}/lib/socks.c
|
||||
${CURL_DIR}/lib/curl_addrinfo.c
|
||||
${CURL_DIR}/lib/socks_gssapi.c
|
||||
${CURL_DIR}/lib/socks_sspi.c
|
||||
${CURL_DIR}/lib/curl_sspi.c
|
||||
${CURL_DIR}/lib/slist.c
|
||||
${CURL_DIR}/lib/nonblock.c
|
||||
${CURL_DIR}/lib/curl_memrchr.c
|
||||
${CURL_DIR}/lib/imap.c
|
||||
${CURL_DIR}/lib/pop3.c
|
||||
${CURL_DIR}/lib/smtp.c
|
||||
${CURL_DIR}/lib/pingpong.c
|
||||
${CURL_DIR}/lib/rtsp.c
|
||||
${CURL_DIR}/lib/curl_threads.c
|
||||
${CURL_DIR}/lib/warnless.c
|
||||
${CURL_DIR}/lib/hmac.c
|
||||
${CURL_DIR}/lib/curl_rtmp.c
|
||||
${CURL_DIR}/lib/openldap.c
|
||||
${CURL_DIR}/lib/curl_gethostname.c
|
||||
${CURL_DIR}/lib/gopher.c
|
||||
${CURL_DIR}/lib/idn_win32.c
|
||||
${CURL_DIR}/lib/http_proxy.c
|
||||
${CURL_DIR}/lib/non-ascii.c
|
||||
${CURL_DIR}/lib/asyn-thread.c
|
||||
${CURL_DIR}/lib/curl_gssapi.c
|
||||
${CURL_DIR}/lib/http_ntlm.c
|
||||
${CURL_DIR}/lib/curl_ntlm_wb.c
|
||||
${CURL_DIR}/lib/curl_ntlm_core.c
|
||||
${CURL_DIR}/lib/curl_sasl.c
|
||||
${CURL_DIR}/lib/rand.c
|
||||
${CURL_DIR}/lib/curl_multibyte.c
|
||||
${CURL_DIR}/lib/hostcheck.c
|
||||
${CURL_DIR}/lib/conncache.c
|
||||
${CURL_DIR}/lib/dotdot.c
|
||||
${CURL_DIR}/lib/x509asn1.c
|
||||
${CURL_DIR}/lib/http2.c
|
||||
${CURL_DIR}/lib/smb.c
|
||||
${CURL_DIR}/lib/curl_endian.c
|
||||
${CURL_DIR}/lib/curl_des.c
|
||||
${CURL_DIR}/lib/system_win32.c
|
||||
${CURL_DIR}/lib/mime.c
|
||||
${CURL_DIR}/lib/sha256.c
|
||||
${CURL_DIR}/lib/setopt.c
|
||||
${CURL_DIR}/lib/curl_path.c
|
||||
${CURL_DIR}/lib/curl_ctype.c
|
||||
${CURL_DIR}/lib/curl_range.c
|
||||
${CURL_DIR}/lib/psl.c
|
||||
${CURL_DIR}/lib/doh.c
|
||||
${CURL_DIR}/lib/urlapi.c
|
||||
${CURL_DIR}/lib/curl_get_line.c
|
||||
${CURL_DIR}/lib/altsvc.c
|
||||
${CURL_DIR}/lib/socketpair.c
|
||||
${CURL_DIR}/lib/vauth/vauth.c
|
||||
${CURL_DIR}/lib/vauth/cleartext.c
|
||||
${CURL_DIR}/lib/vauth/cram.c
|
||||
${CURL_DIR}/lib/vauth/digest.c
|
||||
${CURL_DIR}/lib/vauth/digest_sspi.c
|
||||
${CURL_DIR}/lib/vauth/krb5_gssapi.c
|
||||
${CURL_DIR}/lib/vauth/krb5_sspi.c
|
||||
${CURL_DIR}/lib/vauth/ntlm.c
|
||||
${CURL_DIR}/lib/vauth/ntlm_sspi.c
|
||||
${CURL_DIR}/lib/vauth/oauth2.c
|
||||
${CURL_DIR}/lib/vauth/spnego_gssapi.c
|
||||
${CURL_DIR}/lib/vauth/spnego_sspi.c
|
||||
${CURL_DIR}/lib/vtls/openssl.c
|
||||
${CURL_DIR}/lib/vtls/gtls.c
|
||||
${CURL_DIR}/lib/vtls/vtls.c
|
||||
${CURL_DIR}/lib/vtls/nss.c
|
||||
${CURL_DIR}/lib/vtls/polarssl.c
|
||||
${CURL_DIR}/lib/vtls/polarssl_threadlock.c
|
||||
${CURL_DIR}/lib/vtls/wolfssl.c
|
||||
${CURL_DIR}/lib/vtls/schannel.c
|
||||
${CURL_DIR}/lib/vtls/schannel_verify.c
|
||||
${CURL_DIR}/lib/vtls/sectransp.c
|
||||
${CURL_DIR}/lib/vtls/gskit.c
|
||||
${CURL_DIR}/lib/vtls/mbedtls.c
|
||||
${CURL_DIR}/lib/vtls/mesalink.c
|
||||
${CURL_DIR}/lib/vtls/bearssl.c
|
||||
${CURL_DIR}/lib/vquic/ngtcp2.c
|
||||
${CURL_DIR}/lib/vquic/quiche.c
|
||||
${CURL_DIR}/lib/vssh/libssh2.c
|
||||
${CURL_DIR}/lib/vssh/libssh.c
|
||||
)
|
||||
if (ENABLE_CURL)
|
||||
option (USE_INTERNAL_CURL "Use internal curl library" ${NOT_UNBUNDLED})
|
||||
|
||||
add_library (curl ${SRCS})
|
||||
if (USE_INTERNAL_CURL)
|
||||
set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/curl")
|
||||
|
||||
target_compile_definitions(curl PRIVATE HAVE_CONFIG_H BUILDING_LIBCURL CURL_HIDDEN_SYMBOLS libcurl_EXPORTS)
|
||||
target_include_directories(curl PUBLIC ${CURL_DIR}/include ${CURL_DIR}/lib .)
|
||||
set (SRCS
|
||||
${LIBRARY_DIR}/lib/file.c
|
||||
${LIBRARY_DIR}/lib/timeval.c
|
||||
${LIBRARY_DIR}/lib/base64.c
|
||||
${LIBRARY_DIR}/lib/hostip.c
|
||||
${LIBRARY_DIR}/lib/progress.c
|
||||
${LIBRARY_DIR}/lib/formdata.c
|
||||
${LIBRARY_DIR}/lib/cookie.c
|
||||
${LIBRARY_DIR}/lib/http.c
|
||||
${LIBRARY_DIR}/lib/sendf.c
|
||||
${LIBRARY_DIR}/lib/url.c
|
||||
${LIBRARY_DIR}/lib/dict.c
|
||||
${LIBRARY_DIR}/lib/if2ip.c
|
||||
${LIBRARY_DIR}/lib/speedcheck.c
|
||||
${LIBRARY_DIR}/lib/ldap.c
|
||||
${LIBRARY_DIR}/lib/version.c
|
||||
${LIBRARY_DIR}/lib/getenv.c
|
||||
${LIBRARY_DIR}/lib/escape.c
|
||||
${LIBRARY_DIR}/lib/mprintf.c
|
||||
${LIBRARY_DIR}/lib/telnet.c
|
||||
${LIBRARY_DIR}/lib/netrc.c
|
||||
${LIBRARY_DIR}/lib/getinfo.c
|
||||
${LIBRARY_DIR}/lib/transfer.c
|
||||
${LIBRARY_DIR}/lib/strcase.c
|
||||
${LIBRARY_DIR}/lib/easy.c
|
||||
${LIBRARY_DIR}/lib/security.c
|
||||
${LIBRARY_DIR}/lib/curl_fnmatch.c
|
||||
${LIBRARY_DIR}/lib/fileinfo.c
|
||||
${LIBRARY_DIR}/lib/wildcard.c
|
||||
${LIBRARY_DIR}/lib/krb5.c
|
||||
${LIBRARY_DIR}/lib/memdebug.c
|
||||
${LIBRARY_DIR}/lib/http_chunks.c
|
||||
${LIBRARY_DIR}/lib/strtok.c
|
||||
${LIBRARY_DIR}/lib/connect.c
|
||||
${LIBRARY_DIR}/lib/llist.c
|
||||
${LIBRARY_DIR}/lib/hash.c
|
||||
${LIBRARY_DIR}/lib/multi.c
|
||||
${LIBRARY_DIR}/lib/content_encoding.c
|
||||
${LIBRARY_DIR}/lib/share.c
|
||||
${LIBRARY_DIR}/lib/http_digest.c
|
||||
${LIBRARY_DIR}/lib/md4.c
|
||||
${LIBRARY_DIR}/lib/md5.c
|
||||
${LIBRARY_DIR}/lib/http_negotiate.c
|
||||
${LIBRARY_DIR}/lib/inet_pton.c
|
||||
${LIBRARY_DIR}/lib/strtoofft.c
|
||||
${LIBRARY_DIR}/lib/strerror.c
|
||||
${LIBRARY_DIR}/lib/amigaos.c
|
||||
${LIBRARY_DIR}/lib/hostasyn.c
|
||||
${LIBRARY_DIR}/lib/hostip4.c
|
||||
${LIBRARY_DIR}/lib/hostip6.c
|
||||
${LIBRARY_DIR}/lib/hostsyn.c
|
||||
${LIBRARY_DIR}/lib/inet_ntop.c
|
||||
${LIBRARY_DIR}/lib/parsedate.c
|
||||
${LIBRARY_DIR}/lib/select.c
|
||||
${LIBRARY_DIR}/lib/splay.c
|
||||
${LIBRARY_DIR}/lib/strdup.c
|
||||
${LIBRARY_DIR}/lib/socks.c
|
||||
${LIBRARY_DIR}/lib/curl_addrinfo.c
|
||||
${LIBRARY_DIR}/lib/socks_gssapi.c
|
||||
${LIBRARY_DIR}/lib/socks_sspi.c
|
||||
${LIBRARY_DIR}/lib/curl_sspi.c
|
||||
${LIBRARY_DIR}/lib/slist.c
|
||||
${LIBRARY_DIR}/lib/nonblock.c
|
||||
${LIBRARY_DIR}/lib/curl_memrchr.c
|
||||
${LIBRARY_DIR}/lib/imap.c
|
||||
${LIBRARY_DIR}/lib/pop3.c
|
||||
${LIBRARY_DIR}/lib/smtp.c
|
||||
${LIBRARY_DIR}/lib/pingpong.c
|
||||
${LIBRARY_DIR}/lib/rtsp.c
|
||||
${LIBRARY_DIR}/lib/curl_threads.c
|
||||
${LIBRARY_DIR}/lib/warnless.c
|
||||
${LIBRARY_DIR}/lib/hmac.c
|
||||
${LIBRARY_DIR}/lib/curl_rtmp.c
|
||||
${LIBRARY_DIR}/lib/openldap.c
|
||||
${LIBRARY_DIR}/lib/curl_gethostname.c
|
||||
${LIBRARY_DIR}/lib/gopher.c
|
||||
${LIBRARY_DIR}/lib/idn_win32.c
|
||||
${LIBRARY_DIR}/lib/http_proxy.c
|
||||
${LIBRARY_DIR}/lib/non-ascii.c
|
||||
${LIBRARY_DIR}/lib/asyn-thread.c
|
||||
${LIBRARY_DIR}/lib/curl_gssapi.c
|
||||
${LIBRARY_DIR}/lib/http_ntlm.c
|
||||
${LIBRARY_DIR}/lib/curl_ntlm_wb.c
|
||||
${LIBRARY_DIR}/lib/curl_ntlm_core.c
|
||||
${LIBRARY_DIR}/lib/curl_sasl.c
|
||||
${LIBRARY_DIR}/lib/rand.c
|
||||
${LIBRARY_DIR}/lib/curl_multibyte.c
|
||||
${LIBRARY_DIR}/lib/hostcheck.c
|
||||
${LIBRARY_DIR}/lib/conncache.c
|
||||
${LIBRARY_DIR}/lib/dotdot.c
|
||||
${LIBRARY_DIR}/lib/x509asn1.c
|
||||
${LIBRARY_DIR}/lib/http2.c
|
||||
${LIBRARY_DIR}/lib/smb.c
|
||||
${LIBRARY_DIR}/lib/curl_endian.c
|
||||
${LIBRARY_DIR}/lib/curl_des.c
|
||||
${LIBRARY_DIR}/lib/system_win32.c
|
||||
${LIBRARY_DIR}/lib/mime.c
|
||||
${LIBRARY_DIR}/lib/sha256.c
|
||||
${LIBRARY_DIR}/lib/setopt.c
|
||||
${LIBRARY_DIR}/lib/curl_path.c
|
||||
${LIBRARY_DIR}/lib/curl_ctype.c
|
||||
${LIBRARY_DIR}/lib/curl_range.c
|
||||
${LIBRARY_DIR}/lib/psl.c
|
||||
${LIBRARY_DIR}/lib/doh.c
|
||||
${LIBRARY_DIR}/lib/urlapi.c
|
||||
${LIBRARY_DIR}/lib/curl_get_line.c
|
||||
${LIBRARY_DIR}/lib/altsvc.c
|
||||
${LIBRARY_DIR}/lib/socketpair.c
|
||||
${LIBRARY_DIR}/lib/vauth/vauth.c
|
||||
${LIBRARY_DIR}/lib/vauth/cleartext.c
|
||||
${LIBRARY_DIR}/lib/vauth/cram.c
|
||||
${LIBRARY_DIR}/lib/vauth/digest.c
|
||||
${LIBRARY_DIR}/lib/vauth/digest_sspi.c
|
||||
${LIBRARY_DIR}/lib/vauth/krb5_gssapi.c
|
||||
${LIBRARY_DIR}/lib/vauth/krb5_sspi.c
|
||||
${LIBRARY_DIR}/lib/vauth/ntlm.c
|
||||
${LIBRARY_DIR}/lib/vauth/ntlm_sspi.c
|
||||
${LIBRARY_DIR}/lib/vauth/oauth2.c
|
||||
${LIBRARY_DIR}/lib/vauth/spnego_gssapi.c
|
||||
${LIBRARY_DIR}/lib/vauth/spnego_sspi.c
|
||||
${LIBRARY_DIR}/lib/vtls/openssl.c
|
||||
${LIBRARY_DIR}/lib/vtls/gtls.c
|
||||
${LIBRARY_DIR}/lib/vtls/vtls.c
|
||||
${LIBRARY_DIR}/lib/vtls/nss.c
|
||||
${LIBRARY_DIR}/lib/vtls/polarssl.c
|
||||
${LIBRARY_DIR}/lib/vtls/polarssl_threadlock.c
|
||||
${LIBRARY_DIR}/lib/vtls/wolfssl.c
|
||||
${LIBRARY_DIR}/lib/vtls/schannel.c
|
||||
${LIBRARY_DIR}/lib/vtls/schannel_verify.c
|
||||
${LIBRARY_DIR}/lib/vtls/sectransp.c
|
||||
${LIBRARY_DIR}/lib/vtls/gskit.c
|
||||
${LIBRARY_DIR}/lib/vtls/mbedtls.c
|
||||
${LIBRARY_DIR}/lib/vtls/mesalink.c
|
||||
${LIBRARY_DIR}/lib/vtls/bearssl.c
|
||||
${LIBRARY_DIR}/lib/vquic/ngtcp2.c
|
||||
${LIBRARY_DIR}/lib/vquic/quiche.c
|
||||
${LIBRARY_DIR}/lib/vssh/libssh2.c
|
||||
${LIBRARY_DIR}/lib/vssh/libssh.c
|
||||
)
|
||||
|
||||
target_compile_definitions(curl PRIVATE OS="${CMAKE_SYSTEM_NAME}")
|
||||
add_library (curl ${SRCS})
|
||||
|
||||
target_link_libraries(curl PRIVATE ssl)
|
||||
target_compile_definitions (curl PRIVATE
|
||||
HAVE_CONFIG_H
|
||||
BUILDING_LIBCURL
|
||||
CURL_HIDDEN_SYMBOLS
|
||||
libcurl_EXPORTS
|
||||
OS="${CMAKE_SYSTEM_NAME}"
|
||||
)
|
||||
target_include_directories (curl PUBLIC
|
||||
${LIBRARY_DIR}/include
|
||||
${LIBRARY_DIR}/lib
|
||||
. # curl_config.h
|
||||
)
|
||||
|
||||
target_link_libraries (curl PRIVATE ssl)
|
||||
|
||||
# The library is large - avoid bloat (XXX: is it?)
|
||||
target_compile_options (curl PRIVATE -g0)
|
||||
|
||||
# find_package(CURL) compatibility for the following packages that uses
|
||||
# find_package(CURL)/include(FindCURL):
|
||||
# - mariadb-connector-c
|
||||
# - aws-s3-cmake
|
||||
# - sentry-native
|
||||
set (CURL_FOUND ON CACHE BOOL "")
|
||||
set (CURL_ROOT_DIR ${LIBRARY_DIR} CACHE PATH "")
|
||||
set (CURL_INCLUDE_DIR ${LIBRARY_DIR}/include CACHE PATH "")
|
||||
set (CURL_INCLUDE_DIRS ${LIBRARY_DIR}/include CACHE PATH "")
|
||||
set (CURL_LIBRARY curl CACHE STRING "")
|
||||
set (CURL_LIBRARIES ${CURL_LIBRARY} CACHE STRING "")
|
||||
set (CURL_VERSION_STRING 7.67.0 CACHE STRING "")
|
||||
add_library (CURL::libcurl ALIAS ${CURL_LIBRARY})
|
||||
else ()
|
||||
find_package (CURL REQUIRED)
|
||||
endif ()
|
||||
endif ()
|
||||
|
||||
message (STATUS "Using curl: ${CURL_INCLUDE_DIRS} : ${CURL_LIBRARIES}")
|
||||
|
@ -22,9 +22,14 @@ if (ENABLE_JEMALLOC)
|
||||
#
|
||||
# By enabling percpu_arena number of arenas limited to number of CPUs and hence
|
||||
# this problem should go away.
|
||||
set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0")
|
||||
#
|
||||
# muzzy_decay_ms -- use MADV_FREE when available on newer Linuxes, to
|
||||
# avoid spurious latencies and additional work associated with
|
||||
# MADV_DONTNEED. See
|
||||
# https://github.com/ClickHouse/ClickHouse/issues/11121 for motivation.
|
||||
set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:10000")
|
||||
else()
|
||||
set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0")
|
||||
set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:10000")
|
||||
endif()
|
||||
# CACHE variable is empty, to allow changing defaults without necessity
|
||||
# to purge cache
|
||||
|
@ -76,7 +76,7 @@
|
||||
do { \
|
||||
fprintf(stderr, "libdivide.h:%d: %s(): Error: %s\n", \
|
||||
__LINE__, LIBDIVIDE_FUNCTION, msg); \
|
||||
exit(-1); \
|
||||
abort(); \
|
||||
} while (0)
|
||||
|
||||
#if defined(LIBDIVIDE_ASSERTIONS_ON)
|
||||
@ -85,7 +85,7 @@
|
||||
if (!(x)) { \
|
||||
fprintf(stderr, "libdivide.h:%d: %s(): Assertion failed: %s\n", \
|
||||
__LINE__, LIBDIVIDE_FUNCTION, #x); \
|
||||
exit(-1); \
|
||||
abort(); \
|
||||
} \
|
||||
} while (0)
|
||||
#else
|
||||
@ -290,10 +290,17 @@ static inline int32_t libdivide_count_leading_zeros32(uint32_t val) {
|
||||
}
|
||||
return 0;
|
||||
#else
|
||||
int32_t result = 0;
|
||||
uint32_t hi = 1U << 31;
|
||||
for (; ~val & hi; hi >>= 1) {
|
||||
result++;
|
||||
if (val == 0)
|
||||
return 32;
|
||||
int32_t result = 8;
|
||||
uint32_t hi = 0xFFU << 24;
|
||||
while ((val & hi) == 0) {
|
||||
hi >>= 8;
|
||||
result += 8;
|
||||
}
|
||||
while (val & hi) {
|
||||
result -= 1;
|
||||
hi <<= 1;
|
||||
}
|
||||
return result;
|
||||
#endif
|
||||
|
2
contrib/poco
vendored
2
contrib/poco
vendored
@ -1 +1 @@
|
||||
Subproject commit be2ab90ba5dccd46919a116e3fe4fa77bb85063b
|
||||
Subproject commit 74c93443342f6028fa6402057684733b316aa737
|
@ -24,7 +24,7 @@ if (ENABLE_ODBC)
|
||||
target_include_directories (_poco_data_odbc SYSTEM PUBLIC ${LIBRARY_DIR}/Data/ODBC/include)
|
||||
target_link_libraries (_poco_data_odbc PUBLIC Poco::Data unixodbc)
|
||||
else ()
|
||||
add_library (Poco::Data::ODBC UNKNOWN IMPORTED)
|
||||
add_library (Poco::Data::ODBC UNKNOWN IMPORTED GLOBAL)
|
||||
|
||||
find_library(LIBRARY_POCO_DATA_ODBC PocoDataODBC)
|
||||
find_path(INCLUDE_POCO_DATA_ODBC Poco/Data/ODBC/ODBC.h)
|
||||
|
@ -307,7 +307,7 @@ if (ENABLE_ODBC)
|
||||
set_target_properties (unixodbc PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${INCLUDE_ODBC})
|
||||
endif ()
|
||||
|
||||
target_compile_definitions (unixodbc PUBLIC USE_ODBC=1)
|
||||
target_compile_definitions (unixodbc INTERFACE USE_ODBC=1)
|
||||
|
||||
message (STATUS "Using unixodbc")
|
||||
else ()
|
||||
|
2
docker/bare/Dockerfile
Normal file
2
docker/bare/Dockerfile
Normal file
@ -0,0 +1,2 @@
|
||||
FROM scratch
|
||||
ADD root /
|
37
docker/bare/README.md
Normal file
37
docker/bare/README.md
Normal file
@ -0,0 +1,37 @@
|
||||
## The bare minimum ClickHouse Docker image.
|
||||
|
||||
It is intented as a showcase to check the amount of implicit dependencies of ClickHouse from the OS in addition to the OS kernel.
|
||||
|
||||
Example usage:
|
||||
|
||||
```
|
||||
./prepare
|
||||
docker build --tag clickhouse-bare .
|
||||
```
|
||||
|
||||
Run clickhouse-local:
|
||||
```
|
||||
docker run -it --rm --network host clickhouse-bare /clickhouse local --query "SELECT 1"
|
||||
```
|
||||
|
||||
Run clickhouse-client in interactive mode:
|
||||
```
|
||||
docker run -it --rm --network host clickhouse-bare /clickhouse client
|
||||
```
|
||||
|
||||
Run clickhouse-server:
|
||||
```
|
||||
docker run -it --rm --network host clickhouse-bare /clickhouse server
|
||||
```
|
||||
|
||||
It can be also run in chroot instead of Docker (first edit the `prepare` script to enable `proc`):
|
||||
|
||||
```
|
||||
sudo chroot . /clickhouse server
|
||||
```
|
||||
|
||||
## What does it miss?
|
||||
|
||||
- creation of `clickhouse` user to run the server;
|
||||
- VOLUME for server;
|
||||
- most of the details, see other docker images for comparison.
|
24
docker/bare/prepare
Executable file
24
docker/bare/prepare
Executable file
@ -0,0 +1,24 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
SRC_DIR=../..
|
||||
BUILD_DIR=${SRC_DIR}/build
|
||||
|
||||
# BTW, .so files are acceptable from any Linux distribution for the last 12 years (at least).
|
||||
# See https://presentations.clickhouse.tech/cpp_russia_2020/ for the details.
|
||||
|
||||
mkdir root
|
||||
pushd root
|
||||
mkdir lib lib64 etc tmp root
|
||||
cp ${BUILD_DIR}/programs/clickhouse .
|
||||
cp ${SRC_DIR}/programs/server/{config,users}.xml .
|
||||
cp /lib/x86_64-linux-gnu/{libc.so.6,libdl.so.2,libm.so.6,libpthread.so.0,librt.so.1,libnss_dns.so.2,libresolv.so.2} lib
|
||||
cp /lib64/ld-linux-x86-64.so.2 lib64
|
||||
cp /etc/resolv.conf ./etc
|
||||
strip clickhouse
|
||||
|
||||
# This is needed for chroot but not needed for Docker:
|
||||
|
||||
# mkdir proc
|
||||
# sudo mount --bind /proc proc
|
@ -6,6 +6,7 @@ ARG version=20.6.1.*
|
||||
RUN apt-get update \
|
||||
&& apt-get install --yes --no-install-recommends \
|
||||
apt-transport-https \
|
||||
ca-certificates \
|
||||
dirmngr \
|
||||
gnupg \
|
||||
&& mkdir -p /etc/apt/sources.list.d \
|
||||
|
@ -11,7 +11,8 @@
|
||||
"docker/packager/binary": {
|
||||
"name": "yandex/clickhouse-binary-builder",
|
||||
"dependent": [
|
||||
"docker/test/split_build_smoke_test"
|
||||
"docker/test/split_build_smoke_test",
|
||||
"docker/test/pvs"
|
||||
]
|
||||
},
|
||||
"docker/test/coverage": {
|
||||
|
@ -18,7 +18,7 @@ ccache --zero-stats ||:
|
||||
ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
|
||||
rm -f CMakeCache.txt
|
||||
cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DSANITIZE=$SANITIZER $CMAKE_FLAGS ..
|
||||
ninja -v clickhouse-bundle
|
||||
ninja clickhouse-bundle
|
||||
mv ./programs/clickhouse* /output
|
||||
mv ./src/unit_tests_dbms /output
|
||||
find . -name '*.so' -print -exec mv '{}' /output \;
|
||||
|
@ -68,6 +68,7 @@ RUN apt-get --allow-unauthenticated update -y \
|
||||
libre2-dev \
|
||||
libjemalloc-dev \
|
||||
libmsgpack-dev \
|
||||
libcurl4-openssl-dev \
|
||||
opencl-headers \
|
||||
ocl-icd-libopencl1 \
|
||||
intel-opencl-icd \
|
||||
|
@ -31,7 +31,7 @@ def pull_image(image_name):
|
||||
def build_image(image_name, filepath):
|
||||
subprocess.check_call("docker build --network=host -t {} -f {} .".format(image_name, filepath), shell=True)
|
||||
|
||||
def run_docker_image_with_env(image_name, output, env_variables, ch_root, ccache_dir):
|
||||
def run_docker_image_with_env(image_name, output, env_variables, ch_root, ccache_dir, docker_image_version):
|
||||
env_part = " -e ".join(env_variables)
|
||||
if env_part:
|
||||
env_part = " -e " + env_part
|
||||
@ -46,7 +46,7 @@ def run_docker_image_with_env(image_name, output, env_variables, ch_root, ccache
|
||||
ch_root=ch_root,
|
||||
ccache_dir=ccache_dir,
|
||||
env=env_part,
|
||||
img_name=image_name,
|
||||
img_name=image_name + ":" + docker_image_version,
|
||||
interactive=interactive
|
||||
)
|
||||
|
||||
@ -189,6 +189,7 @@ if __name__ == "__main__":
|
||||
parser.add_argument("--alien-pkgs", nargs='+', default=[])
|
||||
parser.add_argument("--with-coverage", action="store_true")
|
||||
parser.add_argument("--with-binaries", choices=("programs", "tests", ""), default="")
|
||||
parser.add_argument("--docker-image-version", default="latest")
|
||||
|
||||
args = parser.parse_args()
|
||||
if not os.path.isabs(args.output_dir):
|
||||
@ -212,12 +213,14 @@ if __name__ == "__main__":
|
||||
logging.info("Should place {} to output".format(args.with_binaries))
|
||||
|
||||
dockerfile = os.path.join(ch_root, "docker/packager", image_type, "Dockerfile")
|
||||
image_with_version = image_name + ":" + args.docker_image_version
|
||||
if image_type != "freebsd" and not check_image_exists_locally(image_name) or args.force_build_image:
|
||||
if not pull_image(image_name) or args.force_build_image:
|
||||
build_image(image_name, dockerfile)
|
||||
if not pull_image(image_with_version) or args.force_build_image:
|
||||
build_image(image_with_version, dockerfile)
|
||||
env_prepared = parse_env_variables(
|
||||
args.build_type, args.compiler, args.sanitizer, args.package_type, image_type,
|
||||
args.cache, args.distcc_hosts, args.unbundled, args.split_binary, args.clang_tidy,
|
||||
args.version, args.author, args.official, args.alien_pkgs, args.with_coverage, args.with_binaries)
|
||||
run_docker_image_with_env(image_name, args.output_dir, env_prepared, ch_root, args.ccache_dir)
|
||||
|
||||
run_docker_image_with_env(image_name, args.output_dir, env_prepared, ch_root, args.ccache_dir, args.docker_image_version)
|
||||
logging.info("Output placed into {}".format(args.output_dir))
|
||||
|
@ -1,4 +1,4 @@
|
||||
FROM ubuntu:18.04
|
||||
FROM ubuntu:20.04
|
||||
|
||||
ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
|
||||
ARG version=20.6.1.*
|
||||
@ -7,19 +7,21 @@ ARG gosu_ver=1.10
|
||||
RUN apt-get update \
|
||||
&& apt-get install --yes --no-install-recommends \
|
||||
apt-transport-https \
|
||||
ca-certificates \
|
||||
dirmngr \
|
||||
gnupg \
|
||||
&& mkdir -p /etc/apt/sources.list.d \
|
||||
&& apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4 \
|
||||
&& echo $repository > /etc/apt/sources.list.d/clickhouse.list \
|
||||
&& apt-get update \
|
||||
&& env DEBIAN_FRONTEND=noninteractive \
|
||||
apt-get --yes -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" upgrade \
|
||||
&& env DEBIAN_FRONTEND=noninteractive \
|
||||
apt-get install --allow-unauthenticated --yes --no-install-recommends \
|
||||
clickhouse-common-static=$version \
|
||||
clickhouse-client=$version \
|
||||
clickhouse-server=$version \
|
||||
locales \
|
||||
ca-certificates \
|
||||
wget \
|
||||
&& rm -rf \
|
||||
/var/lib/apt/lists/* \
|
||||
|
@ -1,6 +1,6 @@
|
||||
## Docker containers for integration tests
|
||||
- `base` container with required packages
|
||||
- `runner` container with that runs integration tests in docker
|
||||
- `compose` contains docker_compose YaML files that are used in tests
|
||||
- `runnner/compose` contains docker\_compose YaML files that are used in tests
|
||||
|
||||
How to run integration tests is described in tests/integration/README.md
|
||||
How to run integration tests is described in tests/integration/README.md
|
||||
|
@@ -26,6 +26,7 @@ RUN apt-get update \
        liblua5.1-dev \
        luajit \
        libssl-dev \
        libcurl4-openssl-dev \
        gdb \
    && rm -rf \
        /var/lib/apt/lists/* \

@@ -62,6 +63,7 @@ RUN set -eux; \

COPY modprobe.sh /usr/local/bin/modprobe
COPY dockerd-entrypoint.sh /usr/local/bin/
COPY compose/ /compose/

RUN set -x \
    && addgroup --system dockremap \

@@ -0,0 +1,12 @@
version: '2.3'

services:
    rabbitmq1:
        image: rabbitmq:3-management
        hostname: rabbitmq1
        ports:
            - "5672:5672"
            - "15672:15672"
        environment:
            RABBITMQ_DEFAULT_USER: "root"
            RABBITMQ_DEFAULT_PASS: "clickhouse"
@@ -157,7 +157,11 @@ function run_tests

        TIMEFORMAT=$(printf "$test_name\t%%3R\t%%3U\t%%3S\n")
        # the grep is to filter out set -x output and keep only time output
        { time "$script_dir/perf.py" --host localhost localhost --port 9001 9002 -- "$test" > "$test_name-raw.tsv" 2> "$test_name-err.log" ; } 2>&1 >/dev/null | grep -v ^+ >> "wall-clock-times.tsv" || continue
        { \
            time "$script_dir/perf.py" --host localhost localhost --port 9001 9002 \
                -- "$test" > "$test_name-raw.tsv" 2> "$test_name-err.log" ; \
        } 2>&1 >/dev/null | grep -v ^+ >> "wall-clock-times.tsv" \
            || echo "Test $test_name failed with error code $?" >> "$test_name-err.log"
    done

    unset TIMEFORMAT

@@ -274,10 +278,11 @@ for test_file in $(find . -maxdepth 1 -name "*-raw.tsv" -print)
    do
        test_name=$(basename "$test_file" "-raw.tsv")
        sed -n "s/^query\t/$test_name\t/p" < "$test_file" >> "analyze/query-runs.tsv"
        sed -n "s/^client-time/$test_name/p" < "$test_file" >> "analyze/client-times.tsv"
        sed -n "s/^report-threshold/$test_name/p" < "$test_file" >> "analyze/report-thresholds.tsv"
        sed -n "s/^skipped/$test_name/p" < "$test_file" >> "analyze/skipped-tests.tsv"
        sed -n "s/^display-name/$test_name/p" < "$test_file" >> "analyze/query-display-names.tsv"
        sed -n "s/^client-time\t/$test_name\t/p" < "$test_file" >> "analyze/client-times.tsv"
        sed -n "s/^report-threshold\t/$test_name\t/p" < "$test_file" >> "analyze/report-thresholds.tsv"
        sed -n "s/^skipped\t/$test_name\t/p" < "$test_file" >> "analyze/skipped-tests.tsv"
        sed -n "s/^display-name\t/$test_name\t/p" < "$test_file" >> "analyze/query-display-names.tsv"
        sed -n "s/^partial\t/$test_name\t/p" < "$test_file" >> "analyze/partial-queries.tsv"
    done
    unset IFS

@@ -286,6 +291,18 @@ clickhouse-local --query "
create view query_runs as select * from file('analyze/query-runs.tsv', TSV,
    'test text, query_index int, query_id text, version UInt8, time float');

create view partial_queries as select test, query_index
    from file('analyze/partial-queries.tsv', TSV,
        'test text, query_index int, servers Array(int)');

create table partial_query_times engine File(TSVWithNamesAndTypes,
        'analyze/partial-query-times.tsv')
    as select test, query_index, stddevPop(time) time_stddev, median(time) time_median
    from query_runs
    where (test, query_index) in partial_queries
    group by test, query_index
    ;

create view left_query_log as select *
    from file('left-query-log.tsv', TSVWithNamesAndTypes,
        '$(cat "left-query-log.tsv.columns")');

@@ -329,6 +346,7 @@ create table query_run_metrics_full engine File(TSV, 'analyze/query-run-metrics-
    right join query_runs
        on query_logs.query_id = query_runs.query_id
            and query_logs.version = query_runs.version
    where (test, query_index) not in partial_queries
    ;

create table query_run_metrics engine File(

@@ -350,6 +368,7 @@ create table query_run_metric_names engine File(TSV, 'analyze/query-run-metric-n
# query. We also don't have lateral joins. So I just put all runs of each
# query into a separate file, and then compute randomization distribution
# for each file. I do this in parallel using GNU parallel.
( set +x # do not bloat the log
IFS=$'\n'
for prefix in $(cut -f1,2 "analyze/query-run-metrics.tsv" | sort | uniq)
do

@@ -366,6 +385,7 @@ do
done
wait
unset IFS
)

parallel --joblog analyze/parallel-log.txt --null < analyze/commands.txt 2>> analyze/errors.log
}

@@ -389,12 +409,20 @@ create view query_display_names as select * from
        'test text, query_index int, query_display_name text')
    ;

create table partial_queries_report engine File(TSV, 'report/partial-queries-report.tsv')
    as select floor(time_median, 3) m, floor(time_stddev / time_median, 3) v,
        test, query_index, query_display_name
    from file('analyze/partial-query-times.tsv', TSVWithNamesAndTypes,
        'test text, query_index int, time_stddev float, time_median float') t
    join query_display_names using (test, query_index)
    order by test, query_index
    ;

-- WITH, ARRAY JOIN and CROSS JOIN do not like each other:
--  https://github.com/ClickHouse/ClickHouse/issues/11868
--  https://github.com/ClickHouse/ClickHouse/issues/11757
-- Because of this, we make a view with arrays first, and then apply all the
-- array joins.

create view query_metric_stat_arrays as
    with (select * from file('analyze/query-run-metric-names.tsv',
        TSV, 'n Array(String)')) as metric_name

@@ -766,9 +794,16 @@ done
wait
unset IFS

# Remember that grep sets error code when nothing is found, hence the bayan
# operator.
grep -H -m2 -i '\(Exception\|Error\):[^:]' ./*-err.log | sed 's/:/\t/' >> run-errors.tsv ||:
# Prefer to grep for clickhouse_driver exception messages, but if there are none,
# just show a couple of lines from the log.
for log in *-err.log
do
    test=$(basename "$log" "-err.log")
    {
        grep -H -m2 -i '\(Exception\|Error\):[^:]' "$log" \
            || head -2 "$log"
    } | sed "s/^/$test\t/" >> run-errors.tsv ||:
done
}

function report_metrics

@@ -858,10 +893,6 @@ case "$stage" in
        cat "/proc/$pid/smaps" > "$pid-smaps.txt" ||:
    done

    # Sleep for five minutes to see how the servers enter a quiescent state (e.g.
    # how fast the memory usage drops).
    sleep 300

    # We had a bug where getting profiles froze sometimes, so try to save some
    # logs if this happens again. Give the servers some time to collect all info,
    # then trace and kill. Start in a subshell, so that both functions don't
@@ -20,4 +20,6 @@
    </metric_log>

    <uncompressed_cache_size>1000000000</uncompressed_cache_size>

    <asynchronous_metrics_update_period_s>10</asynchronous_metrics_update_period_s>
</yandex>
@@ -24,14 +24,32 @@ dataset_paths["values"]="https://clickhouse-datasets.s3.yandex.net/values_with_e

function download
{
    # Historically there were various paths for the performance test package.
    # Test all of them.
    for path in "https://clickhouse-builds.s3.yandex.net/$left_pr/$left_sha/"{,clickhouse_build_check/}"performance/performance.tgz"
    do
        if curl --fail --head "$path"
        then
            left_path="$path"
        fi
    done

    for path in "https://clickhouse-builds.s3.yandex.net/$right_pr/$right_sha/"{,clickhouse_build_check/}"performance/performance.tgz"
    do
        if curl --fail --head "$path"
        then
            right_path="$path"
        fi
    done

    # might have the same version on left and right
    if ! [ "$left_sha" = "$right_sha" ]
    if ! [ "$left_path" = "$right_path" ]
    then
        wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$left_pr/$left_sha/clickhouse_build_check/performance/performance.tgz" -O- | tar -C left --strip-components=1 -zxv &
        wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$right_pr/$right_sha/clickhouse_build_check/performance/performance.tgz" -O- | tar -C right --strip-components=1 -zxv &
        wget -nv -nd -c "$left_path" -O- | tar -C left --strip-components=1 -zxv &
        wget -nv -nd -c "$right_path" -O- | tar -C right --strip-components=1 -zxv &
    else
        mkdir right ||:
        wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$left_pr/$left_sha/clickhouse_build_check/performance/performance.tgz" -O- | tar -C left --strip-components=1 -zxv && cp -a left/* right &
        wget -nv -nd -c "$left_path" -O- | tar -C left --strip-components=1 -zxv && cp -a left/* right &
    fi

    for dataset_name in $datasets

@@ -50,10 +50,18 @@ function find_reference_sha

        # FIXME sometimes we have testing tags on commits without published builds --
        # normally these are documentation commits. Loop to skip them.
        if curl --fail --head "https://clickhouse-builds.s3.yandex.net/0/$REF_SHA/clickhouse_build_check/performance/performance.tgz"
        then
            break
        fi
        # Historically there were various paths for the performance test package.
        # Test all of them.
        unset found
        for path in "https://clickhouse-builds.s3.yandex.net/0/$REF_SHA/"{,clickhouse_build_check/}"performance/performance.tgz"
        do
            if curl --fail --head "$path"
            then
                found="$path"
                break
            fi
        done
        if [ -n "$found" ] ; then break; fi

        start_ref="$REF_SHA~"
    done
@@ -7,6 +7,7 @@ import clickhouse_driver
import xml.etree.ElementTree as et
import argparse
import pprint
import re
import string
import time
import traceback

@@ -102,10 +103,11 @@ for s in servers:
# connection loses the changes in settings.
drop_query_templates = [q.text for q in root.findall('drop_query')]
drop_queries = substitute_parameters(drop_query_templates)
for c in connections:
for conn_index, c in enumerate(connections):
    for q in drop_queries:
        try:
            c.execute(q)
            print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
        except:
            pass

@@ -117,10 +119,12 @@ for c in connections:
# configurable). So the end result is uncertain, but hopefully we'll be able to
# run at least some queries.
settings = root.findall('settings/*')
for c in connections:
for conn_index, c in enumerate(connections):
    for s in settings:
        try:
            c.execute("set {} = '{}'".format(s.tag, s.text))
            q = f"set {s.tag} = '{s.text}'"
            c.execute(q)
            print(f'set\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
        except:
            print(traceback.format_exc(), file=sys.stderr)

@@ -139,16 +143,28 @@ for t in tables:
# Run create queries
create_query_templates = [q.text for q in root.findall('create_query')]
create_queries = substitute_parameters(create_query_templates)
for c in connections:

# Disallow temporary tables, because the clickhouse_driver reconnects on errors,
# and temporary tables are destroyed. We want to be able to continue after some
# errors.
for q in create_queries:
    if re.search('create temporary table', q, flags=re.IGNORECASE):
        print(f"Temporary tables are not allowed in performance tests: '{q}'",
            file = sys.stderr)
        sys.exit(1)

for conn_index, c in enumerate(connections):
    for q in create_queries:
        c.execute(q)
        print(f'create\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')

# Run fill queries
fill_query_templates = [q.text for q in root.findall('fill_query')]
fill_queries = substitute_parameters(fill_query_templates)
for c in connections:
for conn_index, c in enumerate(connections):
    for q in fill_queries:
        c.execute(q)
        print(f'fill\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')

# Run test queries
for query_index, q in enumerate(test_queries):
@@ -165,31 +181,47 @@ for query_index, q in enumerate(test_queries):

    # Prewarm: run once on both servers. Helps to bring the data into memory,
    # precompile the queries, etc.
    try:
        for conn_index, c in enumerate(connections):
    # A query might not run on the old server if it uses a function added in the
    # new one. We want to run them on the new server only, so that the PR author
    # can ensure that the test works properly. Remember the errors we had on
    # each server.
    query_error_on_connection = [None] * len(connections)
    for conn_index, c in enumerate(connections):
        try:
            prewarm_id = f'{query_prefix}.prewarm0'
            res = c.execute(q, query_id = prewarm_id)
            print(f'prewarm\t{query_index}\t{prewarm_id}\t{conn_index}\t{c.last_query.elapsed}')
        except KeyboardInterrupt:
            raise
        except:
            # If prewarm fails for some query -- skip it, and try to test the others.
            # This might happen if the new test introduces some function that the
            # old server doesn't support. Still, report it as an error.
            # FIXME the driver reconnects on error and we lose settings, so this might
            # lead to further errors or unexpected behavior.
            print(traceback.format_exc(), file=sys.stderr)
        except KeyboardInterrupt:
            raise
        except:
            # FIXME the driver reconnects on error and we lose settings, so this
            # might lead to further errors or unexpected behavior.
            query_error_on_connection[conn_index] = traceback.format_exc()
            continue

    # If prewarm fails for the query on both servers -- report the error, skip
    # the query and continue testing the next query.
    if query_error_on_connection.count(None) == 0:
        print(query_error_on_connection[0], file = sys.stderr)
        continue

    # If prewarm fails on one of the servers, run the query on the rest of them.
    # Useful for queries that use new functions added in the new server version.
    if query_error_on_connection.count(None) < len(query_error_on_connection):
        no_error = [i for i, e in enumerate(query_error_on_connection) if not e]
        print(f'partial\t{query_index}\t{no_error}')

    # Now, perform measured runs.
    # Track the time spent by the client to process this query, so that we can notice
    # out the queries that take long to process on the client side, e.g. by sending
    # excessive data.
    # Track the time spent by the client to process this query, so that we can
    # notice the queries that take long to process on the client side, e.g. by
    # sending excessive data.
    start_seconds = time.perf_counter()
    server_seconds = 0
    for run in range(0, args.runs):
        run_id = f'{query_prefix}.run{run}'
        for conn_index, c in enumerate(connections):
            if query_error_on_connection[conn_index]:
                continue
            res = c.execute(q, query_id = run_id)
            print(f'query\t{query_index}\t{run_id}\t{conn_index}\t{c.last_query.elapsed}')
            server_seconds += c.last_query.elapsed

@@ -198,8 +230,8 @@ for query_index, q in enumerate(test_queries):
    print(f'client-time\t{query_index}\t{client_seconds}\t{server_seconds}')

# Run drop queries
drop_query_templates = [q.text for q in root.findall('drop_query')]
drop_queries = substitute_parameters(drop_query_templates)
for c in connections:
for conn_index, c in enumerate(connections):
    for q in drop_queries:
        c.execute(q)
        print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
@@ -7,6 +7,7 @@ import csv
import itertools
import json
import os
import os.path
import pprint
import sys
import traceback

@@ -23,6 +24,7 @@ faster_queries = 0
slower_queries = 0
unstable_queries = 0
very_unstable_queries = 0
unstable_partial_queries = 0

# max seconds to run one query by itself, not counting preparation
allowed_single_run_time = 2

@@ -194,6 +196,31 @@ if args.report == 'main':
        ['Client time, s', 'Server time, s', 'Ratio', 'Test', 'Query'],
        slow_on_client_rows)

    def print_partial():
        rows = tsvRows('report/partial-queries-report.tsv')
        if not rows:
            return
        global unstable_partial_queries, slow_average_tests
        print(tableStart('Partial queries'))
        columns = ['Median time, s', 'Relative time variance', 'Test', '#', 'Query']
        print(tableHeader(columns))
        attrs = ['' for c in columns]
        for row in rows:
            if float(row[1]) > 0.10:
                attrs[1] = f'style="background: {color_bad}"'
                unstable_partial_queries += 1
            else:
                attrs[1] = ''
            if float(row[0]) > allowed_single_run_time:
                attrs[0] = f'style="background: {color_bad}"'
                slow_average_tests += 1
            else:
                attrs[0] = ''
            print(tableRow(row, attrs))
        print(tableEnd())

    print_partial()

    def print_changes():
        rows = tsvRows('report/changed-perf.tsv')
        if not rows:

@@ -324,6 +351,9 @@ if args.report == 'main':
    print_test_times()

    def print_benchmark_results():
        if not os.path.isfile('benchmark/website-left.json'):
            return

        json_reports = [json.load(open(f'benchmark/website-{x}.json')) for x in ['left', 'right']]
        stats = [next(iter(x.values()))["statistics"] for x in json_reports]
        qps = [x["QPS"] for x in stats]

@@ -417,6 +447,11 @@ if args.report == 'main':
        status = 'failure'
        message_array.append(str(slower_queries) + ' slower')

    if unstable_partial_queries:
        unstable_queries += unstable_partial_queries
        error_tests += unstable_partial_queries
        status = 'failure'

    if unstable_queries:
        message_array.append(str(unstable_queries) + ' unstable')
@@ -25,7 +25,7 @@ ENV PKG_VERSION="pvs-studio-7.08.39365.50-amd64.deb"
RUN wget "https://files.viva64.com/$PKG_VERSION"
RUN sudo dpkg -i "$PKG_VERSION"

CMD cd /repo_folder && pvs-studio-analyzer credentials $LICENCE_NAME $LICENCE_KEY -o ./licence.lic \
CMD echo "Running PVS version $PKG_VERSION" && cd /repo_folder && pvs-studio-analyzer credentials $LICENCE_NAME $LICENCE_KEY -o ./licence.lic \
    && cmake . && ninja re2_st && \
    pvs-studio-analyzer analyze -o pvs-studio.log -e contrib -j 4 -l ./licence.lic; \
    plog-converter -a GA:1,2 -t fullhtml -o /test_output/pvs-studio-html-report pvs-studio.log; \

@@ -53,4 +53,4 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
    && clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits" \
    && clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits" \
    && clickhouse-client --query "SHOW TABLES FROM test" \
    && clickhouse-test --testname --shard --zookeeper --no-stateless $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
    && clickhouse-test --testname --shard --zookeeper --no-stateless --use-skip-list $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt

@@ -105,7 +105,7 @@ LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW TABL
LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-client --query "SHOW TABLES FROM test"
LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-test --testname --shard --zookeeper --no-stateless $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
LLVM_PROFILE_FILE='client_%h_%p_%m.profraw' clickhouse-test --testname --shard --zookeeper --no-stateless --use-skip-list $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt

kill_clickhouse
|
||||
if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
|
||||
ln -sf /usr/share/clickhouse-test/config/client_config.xml /etc/clickhouse-client/config.xml; \
|
||||
service zookeeper start; sleep 5; \
|
||||
service clickhouse-server start && sleep 5 && clickhouse-test --testname --shard --zookeeper $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
|
||||
service clickhouse-server start && sleep 5 && clickhouse-test --testname --shard --zookeeper --use-skip-list $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
|
||||
|
@ -76,7 +76,7 @@ start_clickhouse
|
||||
|
||||
sleep 10
|
||||
|
||||
LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-test --testname --shard --zookeeper $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
|
||||
LLVM_PROFILE_FILE='client_coverage.profraw' clickhouse-test --testname --shard --zookeeper --use-skip-list $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
|
||||
|
||||
kill_clickhouse
|
||||
|
||||
|
@ -17,13 +17,13 @@ def run_perf_test(cmd, xmls_path, output_folder):
|
||||
def run_func_test(cmd, output_prefix, num_processes, skip_tests_option):
|
||||
output_paths = [os.path.join(output_prefix, "stress_test_run_{}.txt".format(i)) for i in range(num_processes)]
|
||||
f = open(output_paths[0], 'w')
|
||||
main_command = "{} {}".format(cmd, skip_tests_option)
|
||||
main_command = "{} --use-skip-list {}".format(cmd, skip_tests_option)
|
||||
logging.info("Run func tests main cmd '%s'", main_command)
|
||||
pipes = [Popen(main_command, shell=True, stdout=f, stderr=f)]
|
||||
for output_path in output_paths[1:]:
|
||||
time.sleep(0.5)
|
||||
f = open(output_path, 'w')
|
||||
full_command = "{} --order=random {}".format(cmd, skip_tests_option)
|
||||
full_command = "{} --use-skip-list --order=random {}".format(cmd, skip_tests_option)
|
||||
logging.info("Run func tests '%s'", full_command)
|
||||
p = Popen(full_command, shell=True, stdout=f, stderr=f)
|
||||
pipes.append(p)
|
||||
|
@@ -1,7 +1,18 @@
---
toc_folder_title: Commercial
toc_priority: 70
toc_title: Commercial
toc_title: Introduction
---

# ClickHouse Commercial Services

This section is a directory of commercial service providers specializing in ClickHouse. They are independent companies not necessarily affiliated with Yandex.

Service categories:

- [Cloud](cloud.md)
- [Support](support.md)

!!! note "For service providers"
    If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service doesn’t fit into existing categories). The easiest way to open a pull request for a documentation page is by using the “pencil” edit button in the top-right corner. If your service is available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull request description).
@@ -120,7 +120,7 @@ There are ordinary functions and aggregate functions. For aggregate functions, s

Ordinary functions don’t change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`’s of data to implement vectorized query execution.

There are some miscellaneous functions, like [blockSize](../sql-reference/functions/other-functions.md#function-blocksize), [rowNumberInBlock](../sql-reference/functions/other-functions.md#function-rownumberinblock), and [runningAccumulate](../sql-reference/functions/other-functions.md#function-runningaccumulate), that exploit block processing and violate the independence of rows.
There are some miscellaneous functions, like [blockSize](../sql-reference/functions/other-functions.md#function-blocksize), [rowNumberInBlock](../sql-reference/functions/other-functions.md#function-rownumberinblock), and [runningAccumulate](../sql-reference/functions/other-functions.md#runningaccumulate), that exploit block processing and violate the independence of rows.
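To make the block dependence concrete, here is a quick illustrative query (a sketch, assuming only the standard `numbers` table function):

``` sql
-- blockSize() and rowNumberInBlock() depend on how rows are packed into blocks,
-- so the same logical row set can yield different values under a different block split.
SELECT number, blockSize() AS rows_in_block, rowNumberInBlock() AS row_in_block
FROM numbers(5);
```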
ClickHouse has strong typing, so there’s no implicit type conversion. If a function doesn't support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function.

@@ -1,5 +1,5 @@
---
toc_priority: 36
toc_priority: 4
toc_title: HDFS
---
@@ -3,4 +3,14 @@ toc_folder_title: Integrations
toc_priority: 30
---

# Table Engines for Integrations

ClickHouse provides various means for integrating with external systems, including table engines. Like with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. Then from a user perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like external dictionaries or table functions, which require the use of custom query methods on each use (see the sketch after this list).

List of supported integrations:

- [ODBC](odbc.md)
- [JDBC](jdbc.md)
- [MySQL](mysql.md)
- [HDFS](hdfs.md)
- [Kafka](kafka.md)
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_priority: 34
|
||||
toc_priority: 2
|
||||
toc_title: JDBC
|
||||
---
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_priority: 32
|
||||
toc_priority: 5
|
||||
toc_title: Kafka
|
||||
---
|
||||
|
||||
|
@ -1,9 +1,9 @@
|
||||
---
|
||||
toc_priority: 33
|
||||
toc_priority: 3
|
||||
toc_title: MySQL
|
||||
---
|
||||
|
||||
# Mysql {#mysql}
|
||||
# MySQL {#mysql}
|
||||
|
||||
The MySQL engine allows you to perform `SELECT` queries on data that is stored on a remote MySQL server.
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_priority: 35
|
||||
toc_priority: 1
|
||||
toc_title: ODBC
|
||||
---
|
||||
|
||||
|
@ -50,6 +50,8 @@ Clusters are set like this:
|
||||
<!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
|
||||
<internal_replication>false</internal_replication>
|
||||
<replica>
|
||||
<!-- Optional. Priority of the replica for load balancing (see also load_balancing setting). Default: 1 (less value has more priority). -->
|
||||
<priority>1</priority>
|
||||
<host>example01-01-1</host>
|
||||
<port>9000</port>
|
||||
</replica>
|
||||
|
@@ -3,4 +3,12 @@ toc_folder_title: Special
toc_priority: 31
---

# Special Table Engines

There are three main categories of table engines:

- [MergeTree engine family](../../../engines/table-engines/mergetree-family/index.md) for main production use.
- [Log engine family](../../../engines/table-engines/log-family/index.md) for small temporary data.
- [Table engines for integrations](../../../engines/table-engines/integrations/index.md).

The remaining engines are unique in their purpose and are not grouped into families yet, thus they are placed in this “special” category.

@@ -1055,11 +1055,11 @@ Each Avro message embeds a schema id that can be resolved to the actual schema w

Schemas are cached once resolved.

Schema Registry URL is configured with [format\_avro\_schema\_registry\_url](../operations/settings/settings.md#settings-format_avro_schema_registry_url)
Schema Registry URL is configured with [format\_avro\_schema\_registry\_url](../operations/settings/settings.md#format_avro_schema_registry_url).

### Data Types Matching {#data_types-matching-1}

Same as [Avro](#data-format-avro)
Same as [Avro](#data-format-avro).

### Usage {#usage}

@@ -1093,7 +1093,7 @@ SELECT * FROM topic1_stream;
```

!!! note "Warning"
    Setting `format_avro_schema_registry_url` needs to be configured in `users.xml` to maintain its value after a restart.
    Setting `format_avro_schema_registry_url` needs to be configured in `users.xml` to maintain its value after a restart. You can also use the `format_avro_schema_registry_url` setting of the `Kafka` table engine.

## Parquet {#data-format-parquet}
11
docs/en/interfaces/third-party/index.md
vendored
@@ -3,4 +3,15 @@ toc_folder_title: Third-Party
toc_priority: 24
---

# Third-Party Interfaces

This is a collection of links to third-party tools that provide some sort of interface to ClickHouse. It can be either a visual interface, a command-line interface, or an API:

- [Client libraries](client-libraries.md)
- [Integrations](integrations.md)
- [GUI](gui.md)
- [Proxies](proxy.md)

!!! note "Note"
    Generic tools that support a common API like [ODBC](../../interfaces/odbc.md) or [JDBC](../../interfaces/jdbc.md) can usually work with ClickHouse as well, but are not listed here because there are way too many of them.
@@ -50,7 +50,8 @@ toc_title: Adopters
| <a href="http://www.pragma-innovation.fr/" class="favicon">Pragma Innovation</a> | Telemetry and Big Data Analysis | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/4_pragma_innovation.pdf) |
| <a href="https://www.qingcloud.com/" class="favicon">QINGCLOUD</a> | Cloud services | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) |
| <a href="https://qrator.net" class="favicon">Qrator</a> | DDoS protection | Main product | — | — | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) |
| <a href="https://www.percent.cn/" class="favicon">Percent 百分点</a> | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) |
| <a href="https://plausible.io/" class="favicon">Plausible</a> | Analytics | Main Product | — | — | [Blog post, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) |
| <a href="https://rambler.ru" class="favicon">Rambler</a> | Internet services | Analytics | — | — | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) |
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
| <a href="https://trafficstars.com/" class="favicon">Traffic Stars</a> | AD network | — | — | — | [Slides in Russian, May 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup15/lightning/ninja.pdf) |
@@ -34,7 +34,7 @@ By default, the ClickHouse server provides the `default` user account which is n
If you just started using ClickHouse, consider the following scenario:

1. [Enable](#enabling-access-control) SQL-driven access control and account management for the `default` user.
2. Log in to the `default` user account and create all the required users. Don’t forget to create an administrator account (`GRANT ALL ON *.* WITH GRANT OPTION TO admin_user_account`).
2. Log in to the `default` user account and create all the required users. Don’t forget to create an administrator account (`GRANT ALL ON *.* TO admin_user_account WITH GRANT OPTION`), as sketched after this list.
3. [Restrict permissions](../operations/settings/permissions-for-queries.md#permissions_for_queries) for the `default` user and disable SQL-driven access control and account management for it.
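A minimal sketch of step 2 (the account name and password are placeholders):

``` sql
-- Create an administrator account and let it grant rights to other users.
CREATE USER admin_user_account IDENTIFIED WITH sha256_password BY 'strong_password';
GRANT ALL ON *.* TO admin_user_account WITH GRANT OPTION;
```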
### Properties of Current Solution {#access-control-properties}

@@ -1,6 +1,9 @@
---
toc_folder_title: Optimizing Performance
toc_priority: 52
toc_hidden: true
---

# Optimizing Performance

- [Sampling query profiler](sampling-query-profiler.md)
@@ -398,6 +398,27 @@ The cache is shared for the server and memory is allocated as needed. The cache
<mark_cache_size>5368709120</mark_cache_size>
```

## max_server_memory_usage {#max_server_memory_usage}

Limits total RAM usage by the ClickHouse server. You can specify it only for the default profile.

Possible values:

- Positive integer.
- 0 — Unlimited.

Default value: `0`.

**Additional Info**

On hosts with low RAM and swap, you may need to set `max_server_memory_usage_to_ram_ratio > 1`.

**See also**

- [max_memory_usage](../settings/query-complexity.md#settings_max_memory_usage)

## max\_concurrent\_queries {#max-concurrent-queries}

The maximum number of simultaneously processed requests.
@@ -36,7 +36,7 @@ Memory usage is not monitored for the states of certain aggregate functions.

Memory usage is not fully tracked for states of the aggregate functions `min`, `max`, `any`, `anyLast`, `argMin`, `argMax` from `String` and `Array` arguments.

Memory consumption is also restricted by the parameters `max_memory_usage_for_user` and `max_memory_usage_for_all_queries`.
Memory consumption is also restricted by the parameters `max_memory_usage_for_user` and [max_server_memory_usage](../server-configuration-parameters/settings.md#max_server_memory_usage).
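For instance, the per-query limit mentioned above can be tightened per session (a sketch; the values are arbitrary):

``` sql
-- Cap a single query at roughly 10 GB of tracked memory; a high-cardinality
-- GROUP BY like this one is the kind of query that can hit the limit.
SET max_memory_usage = 10000000000;
SELECT number % 1000000 AS k, count() FROM numbers(100000000) GROUP BY k FORMAT Null;
```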
## max\_memory\_usage\_for\_user {#max-memory-usage-for-user}

@@ -46,18 +46,10 @@ Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHo

See also the description of [max\_memory\_usage](#settings_max_memory_usage).

## max\_memory\_usage\_for\_all\_queries {#max-memory-usage-for-all-queries}

The maximum amount of RAM to use for running all queries on a single server.

Default values are defined in [Settings.h](https://github.com/ClickHouse/ClickHouse/blob/master/src/Core/Settings.h#L289). By default, the amount is not restricted (`max_memory_usage_for_all_queries = 0`).

See also the description of [max\_memory\_usage](#settings_max_memory_usage).

## max\_rows\_to\_read {#max-rows-to-read}

The following restrictions can be checked on each block (instead of on each row). That is, the restrictions can be broken a little.
When running a query in multiple threads, the following restrictions apply to each thread separately.

A maximum number of rows that can be read from a table when running a query.
@@ -727,6 +727,17 @@ The INSERT query also contains data for INSERT that is processed by a separate s

Default value: 256 KiB.

## max\_parser\_depth {#max_parser_depth}

Limits maximum recursion depth in the recursive descent parser. Allows controlling the stack size.

Possible values:

- Positive integer.
- 0 — Recursion depth is unlimited.

Default value: 1000.
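An illustrative sketch of the effect (the values are arbitrary):

``` sql
-- With a very low limit, even a modestly nested expression may fail to parse
-- with a too-deep-recursion error.
SET max_parser_depth = 10;
SELECT ((((((((((1))))))))));

-- Restoring the default allows deeper expressions at the cost of more stack usage.
SET max_parser_depth = 1000;
SELECT ((((((((((1))))))))));
```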
## interactive\_delay {#interactive-delay}

The interval in microseconds for checking whether request execution has been cancelled and sending the progress.

@@ -1368,13 +1379,11 @@ Possible values: 32 (32 bytes) - 1073741824 (1 GiB)

Default value: 32768 (32 KiB)

## format\_avro\_schema\_registry\_url {#settings-format_avro_schema_registry_url}
## format\_avro\_schema\_registry\_url {#format_avro_schema_registry_url}

Sets Confluent Schema Registry URL to use with [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format
Sets [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html) URL to use with [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format.

Type: URL

Default value: Empty
Default value: `Empty`.
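A usage sketch (the registry URL is a placeholder):

``` sql
-- Point AvroConfluent decoding at a schema registry for the current session.
SET format_avro_schema_registry_url = 'http://schema-registry:8081';
```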
## background\_pool\_size {#background_pool_size}

@@ -1418,6 +1427,23 @@ Possible values:

Default value: 16.

## always_fetch_merged_part {#always_fetch_merged_part}

Prohibits data parts merging in [Replicated*MergeTree](../../engines/table-engines/mergetree-family/replication.md)-engine tables.

When merging is prohibited, the replica never merges parts and always downloads merged parts from other replicas. If the required data is not available yet, the replica waits for it. CPU and disk load on the replica server decreases, but the network load on the cluster increases. This setting can be useful on servers with relatively weak CPUs or slow disks, such as servers for backup storage.

Possible values:

- 0 — `Replicated*MergeTree`-engine tables merge data parts at the replica.
- 1 — `Replicated*MergeTree`-engine tables don't merge data parts at the replica. The tables download merged data parts from other replicas.

Default value: 0.
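On a weak backup replica the setting could be enabled like this (a sketch, assuming it is applied in the session or user profile):

``` sql
-- Make this replica fetch merged parts instead of merging locally.
SET always_fetch_merged_part = 1;
```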
**See Also**

- [Data Replication](../../engines/table-engines/mergetree-family/replication.md)

## background\_distributed\_schedule\_pool\_size {#background_distributed_schedule_pool_size}

Sets the number of threads performing background tasks for [distributed](../../engines/table-engines/special/distributed.md) sends. This setting is applied at ClickHouse server start and can’t be changed in a user session.

@@ -120,6 +120,7 @@ zoo.cfg:
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# This value is not quite motivated
initLimit=30000
# The number of ticks that can pass between
# sending a request and getting an acknowledgement

@@ -127,6 +128,9 @@ syncLimit=10

maxClientCnxns=2000

# It is the maximum value that client may request and the server will accept.
# It is Ok to have high maxSessionTimeout on server to allow clients to work with high session timeout if they want.
# But we request session timeout of 30 seconds by default (you can change it with session_timeout_ms in ClickHouse config).
maxSessionTimeout=60000000
# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/data
@@ -33,7 +33,7 @@ To work with these states, use:

- [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engine.
- [finalizeAggregation](../../sql-reference/functions/other-functions.md#function-finalizeaggregation) function.
- [runningAccumulate](../../sql-reference/functions/other-functions.md#function-runningaccumulate) function.
- [runningAccumulate](../../sql-reference/functions/other-functions.md#runningaccumulate) function.
- [-Merge](#aggregate_functions_combinators-merge) combinator.
- [-MergeState](#aggregate_functions_combinators-mergestate) combinator.
@@ -1,6 +1,31 @@
---
toc_folder_title: Domains
toc_priority: 56
toc_folder_title: Domains
toc_title: Overview
---

# Domains {#domains}

Domains are special-purpose types that add some extra features atop an existing base type, while leaving the on-wire and on-disc format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example (see the sketch after this list):

- Create a column of a domain type
- Read/write values from/to domain column
- Use it as an index if a base type can be used as an index
- Call functions with values of domain column
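A minimal sketch using the built-in `IPv4` domain:

``` sql
-- IPv4 is stored as a UInt32 but reads and writes in the familiar dotted form.
CREATE TABLE hits (url String, from IPv4) ENGINE = MergeTree() ORDER BY from;
INSERT INTO hits (url, from) VALUES ('https://clickhouse.tech', '116.106.34.242');
SELECT from FROM hits;  -- prints 116.106.34.242, not the raw integer
```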
### Extra Features of Domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Input from human-friendly format with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Output to human-friendly format for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert index column of base type to domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/) <!--hide-->
@@ -1,30 +0,0 @@
---
toc_priority: 58
toc_title: Overview
---

# Domains {#domains}

Domains are special-purpose types that add some extra features atop of existing base type, but leaving on-wire and on-disc format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to domain column
- Use it as an index if a base type can be used as an index
- Call functions with values of domain column

### Extra Features of Domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Input from human-friendly format with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Output to human-friendly format for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert index column of base type to domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constrains on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@@ -149,6 +149,63 @@ Rounds down a date with time to the start of the hour.

Rounds down a date with time to the start of the minute.

## toStartOfSecond {#tostartofsecond}

Truncates sub-seconds.

**Syntax**

``` sql
toStartOfSecond(value[, timezone])
```

**Parameters**

- `value` — Date and time. [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).

**Returned value**

- Input value without sub-seconds.

Type: [DateTime64](../data-types/datetime64.md).

**Examples**

Query without timezone:

``` sql
WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64
SELECT toStartOfSecond(dt64);
```

Result:

``` text
┌───toStartOfSecond(dt64)─┐
│ 2020-01-01 10:20:30.000 │
└─────────────────────────┘
```

Query with timezone:

``` sql
WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64
SELECT toStartOfSecond(dt64, 'Europe/Moscow');
```

Result:

``` text
┌─toStartOfSecond(dt64, 'Europe/Moscow')─┐
│                2020-01-01 13:20:30.000 │
└────────────────────────────────────────┘
```

**See also**

- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.

## toStartOfFiveMinute {#tostartoffiveminute}

Rounds down a date with time to the start of the five-minute interval.
@@ -267,7 +267,7 @@ SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 4) AS thasos

## h3GetBaseCell {#h3getbasecell}

Returns the base cell number of the index.
Returns the base cell number of the H3 index.

**Syntax**

@@ -275,20 +275,22 @@ Returns the base cell number of the index.
h3GetBaseCell(index)
```

**Parameters**
**Parameter**

- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- Hexagon base cell number. Type: [UInt8](../../sql-reference/data-types/int-uint.md).
- Hexagon base cell number.

Type: [UInt8](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT h3GetBaseCell(612916788725809151) as basecell
SELECT h3GetBaseCell(612916788725809151) as basecell;
```

Result:

@@ -301,7 +303,7 @@ Result:

## h3HexAreaM2 {#h3hexaream2}

Average hexagon area in square meters at the given resolution.
Returns average hexagon area in square meters at the given resolution.

**Syntax**

@@ -309,20 +311,22 @@ Average hexagon area in square meters at the given resolution.
h3HexAreaM2(resolution)
```

**Parameters**
**Parameter**

- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- Area in m². Type: [Float64](../../sql-reference/data-types/float.md).
- Area in square meters.

Type: [Float64](../../sql-reference/data-types/float.md).

**Example**

Query:

``` sql
SELECT h3HexAreaM2(13) as area
SELECT h3HexAreaM2(13) as area;
```

Result:

@@ -335,7 +339,7 @@ Result:

## h3IndexesAreNeighbors {#h3indexesareneighbors}

Returns whether or not the provided H3Indexes are neighbors.
Returns whether or not the provided H3 indexes are neighbors.

**Syntax**

@@ -348,16 +352,19 @@ h3IndexesAreNeighbors(index1, index2)
- `index1` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).
- `index2` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- Returns `1` if the indexes are neighbors, `0` otherwise. Type: [UInt8](../../sql-reference/data-types/int-uint.md).
- `1` — Indexes are neighbours.
- `0` — Indexes are not neighbours.

Type: [UInt8](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT h3IndexesAreNeighbors(617420388351344639, 617420388352655359) AS n
SELECT h3IndexesAreNeighbors(617420388351344639, 617420388352655359) AS n;
```

Result:

@@ -370,7 +377,7 @@ Result:

## h3ToChildren {#h3tochildren}

Returns an array with the child indexes of the given index.
Returns an array of child indexes for the given H3 index.

**Syntax**

@@ -385,14 +392,16 @@ h3ToChildren(index, resolution)

**Returned values**

- Array with the child H3 indexes. Array of type: [UInt64](../../sql-reference/data-types/int-uint.md).
- Array of the child H3-indexes.

Type: [Array](../../sql-reference/data-types/array.md)([UInt64](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT h3ToChildren(599405990164561919, 6) AS children
SELECT h3ToChildren(599405990164561919, 6) AS children;
```

Result:

@@ -405,7 +414,7 @@ Result:

## h3ToParent {#h3toparent}

Returns the parent (coarser) index containing the given index.
Returns the parent (coarser) index containing the given H3 index.

**Syntax**

@@ -418,16 +427,18 @@ h3ToParent(index, resolution)
- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).
- `resolution` — Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- Parent H3 index. Type: [UInt64](../../sql-reference/data-types/int-uint.md).
- Parent H3 index.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT h3ToParent(599405990164561919, 3) as parent
SELECT h3ToParent(599405990164561919, 3) as parent;
```

Result:

@@ -440,26 +451,28 @@ Result:

## h3ToString {#h3tostring}

Converts the H3Index representation of the index to the string representation.
Converts the `H3Index` representation of the index to the string representation.

``` sql
h3ToString(index)
```

**Parameters**
**Parameter**

- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- String representation of the H3 index. Type: [String](../../sql-reference/data-types/string.md).
- String representation of the H3 index.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

``` sql
SELECT h3ToString(617420388352917503) as h3_string
SELECT h3ToString(617420388352917503) as h3_string;
```

Result:

@@ -472,17 +485,19 @@ Result:

## stringToH3 {#stringtoh3}

Converts the string representation to H3Index (UInt64) representation.
Converts the string representation to the `H3Index` (UInt64) representation.

**Syntax**

``` sql
stringToH3(index_str)
```

**Parameters**
**Parameter**

- `index_str` — String representation of the H3 index. Type: [String](../../sql-reference/data-types/string.md).

**Returned values**
**Returned value**

- Hexagon index number. Returns 0 on error. Type: [UInt64](../../sql-reference/data-types/int-uint.md).

@@ -491,7 +506,7 @@ stringToH3(index_str)
Query:

``` sql
SELECT stringToH3('89184926cc3ffff') as index
SELECT stringToH3('89184926cc3ffff') as index;
```

Result:

@@ -504,7 +519,7 @@ Result:

## h3GetResolution {#h3getresolution}

Returns the resolution of the index.
Returns the resolution of the H3 index.

**Syntax**

@@ -512,11 +527,11 @@ Returns the resolution of the index.
h3GetResolution(index)
```

**Parameters**
**Parameter**

- `index` — Hexagon index number. Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Returned values**
**Returned value**

- Index resolution. Range: `[0, 15]`. Type: [UInt8](../../sql-reference/data-types/int-uint.md).

@@ -525,7 +540,7 @@ h3GetResolution(index)
Query:

``` sql
SELECT h3GetResolution(617420388352917503) as res
SELECT h3GetResolution(617420388352917503) as res;
```

Result:

@@ -536,4 +551,4 @@ Result:
└─────┘
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/geo/) <!--hide-->
[Original article](https://clickhouse.tech/docs/en/sql-reference/functions/geo/) <!--hide-->
@ -1054,11 +1054,110 @@ Result:
|
||||
|
||||
Takes state of aggregate function. Returns result of aggregation (finalized state).
|
||||
|
||||
## runningAccumulate {#function-runningaccumulate}
|
||||
## runningAccumulate {#runningaccumulate}
|
||||
|
||||
Takes the states of the aggregate function and returns a column with values, are the result of the accumulation of these states for a set of block lines, from the first to the current line.
|
||||
For example, takes state of aggregate function (example runningAccumulate(uniqState(UserID))), and for each row of block, return result of aggregate function on merge of states of all previous rows and current row.
|
||||
So, result of function depends on partition of data to blocks and on order of data in block.
|
||||
Accumulates states of an aggregate function for each row of a data block.
|
||||
|
||||
!!! warning "Warning"
|
||||
The state is reset for each new data block.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
runningAccumulate(agg_state[, grouping]);
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `agg_state` — State of the aggregate function. [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction).
|
||||
- `grouping` — Grouping key. Optional. The state of the function is reset if the `grouping` value is changed. It can be any of the [supported data types](../../sql-reference/data-types/index.md) for which the equality operator is defined.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Each resulting row contains a result of the aggregate function, accumulated for all the input rows from 0 to the current position. `runningAccumulate` resets states for each new data block or when the `grouping` value changes.
|
||||
|
||||
Type depends on the aggregate function used.
|
||||
|
||||
**Examples**
|
||||
|
||||
Consider how you can use `runningAccumulate` to find the cumulative sum of numbers without and with grouping.
|
||||
|
||||
Query:

```sql
SELECT k, runningAccumulate(sum_k) AS res FROM (SELECT number as k, sumState(k) AS sum_k FROM numbers(10) GROUP BY k ORDER BY k);
```

Result:

```text
┌─k─┬─res─┐
│ 0 │   0 │
│ 1 │   1 │
│ 2 │   3 │
│ 3 │   6 │
│ 4 │  10 │
│ 5 │  15 │
│ 6 │  21 │
│ 7 │  28 │
│ 8 │  36 │
│ 9 │  45 │
└───┴─────┘
```
The subquery generates `sumState` for every number from `0` to `9`. `sumState` returns the state of the [sum](../aggregate-functions/reference/sum.md) function that contains the sum of a single number.

The whole query does the following:

1. For the first row, `runningAccumulate` takes `sumState(0)` and returns `0`.
2. For the second row, the function merges `sumState(0)` and `sumState(1)` resulting in `sumState(0 + 1)`, and returns `1` as a result.
3. For the third row, the function merges `sumState(0 + 1)` and `sumState(2)` resulting in `sumState(0 + 1 + 2)`, and returns `3` as a result.
4. The actions are repeated until the block ends.

The following example shows the `grouping` parameter usage:
Query:

```sql
SELECT
    grouping,
    item,
    runningAccumulate(state, grouping) AS res
FROM
(
    SELECT
        toInt8(number / 4) AS grouping,
        number AS item,
        sumState(number) AS state
    FROM numbers(15)
    GROUP BY item
    ORDER BY item ASC
);
```

Result:

```text
┌─grouping─┬─item─┬─res─┐
│        0 │    0 │   0 │
│        0 │    1 │   1 │
│        0 │    2 │   3 │
│        0 │    3 │   6 │
│        1 │    4 │   4 │
│        1 │    5 │   9 │
│        1 │    6 │  15 │
│        1 │    7 │  22 │
│        2 │    8 │   8 │
│        2 │    9 │  17 │
│        2 │   10 │  27 │
│        2 │   11 │  38 │
│        3 │   12 │  12 │
│        3 │   13 │  25 │
│        3 │   14 │  39 │
└──────────┴──────┴─────┘
```

As you can see, `runningAccumulate` merges states for each group of rows separately.
## joinGet {#joinget}

@ -521,6 +521,80 @@ Result:

- [toDate](#todate)
- [toDateTime](#todatetime)
## parseDateTimeBestEffortUS {#parsedatetimebesteffortUS}

This function is similar to [parseDateTimeBestEffort](#parsedatetimebesteffort); the only difference is that it prefers the US date style (`MM/DD/YYYY`, etc.) in case of ambiguity.
**Syntax**

``` sql
parseDateTimeBestEffortUS(time_string [, time_zone]);
```

**Parameters**

- `time_string` — String containing a date and time to convert. [String](../../sql-reference/data-types/string.md).
- `time_zone` — Time zone. Optional. The function parses `time_string` according to the time zone. [String](../../sql-reference/data-types/string.md).
**Supported non-standard formats**

- A string containing 9..10 digit [unix timestamp](https://en.wikipedia.org/wiki/Unix_time).
- A string with a date and a time component: `YYYYMMDDhhmmss`, `MM/DD/YYYY hh:mm:ss`, `MM-DD-YY hh:mm`, `YYYY-MM-DD hh:mm:ss`, etc.
- A string with a date, but no time component: `YYYY`, `YYYYMM`, `YYYY*MM`, `MM/DD/YYYY`, `MM-DD-YY`, etc.
- A string with a day and time: `DD`, `DD hh`, `DD hh:mm`. In this case, `YYYY-MM` is substituted with `2000-01`.
- A string that includes the date and time along with time zone offset information: `YYYY-MM-DD hh:mm:ss ±h:mm`, etc. For example, `2020-12-12 17:36:00 -5:00`.
**Returned value**

- `time_string` converted to the `DateTime` data type.

**Examples**

Query:

``` sql
SELECT parseDateTimeBestEffortUS('09/12/2020 12:12:57')
AS parseDateTimeBestEffortUS;
```

Result:

``` text
┌─parseDateTimeBestEffortUS─┐
│ 2020-09-12 12:12:57       │
└───────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffortUS('09-12-2020 12:12:57')
AS parseDateTimeBestEffortUS;
```

Result:

``` text
┌─parseDateTimeBestEffortUS─┐
│ 2020-09-12 12:12:57       │
└───────────────────────────┘
```

Query:

``` sql
SELECT parseDateTimeBestEffortUS('09.12.2020 12:12:57')
AS parseDateTimeBestEffortUS;
```

Result:

``` text
┌─parseDateTimeBestEffortUS─┐
│ 2020-09-12 12:12:57       │
└───────────────────────────┘
```
## parseDateTimeBestEffortOrNull {#parsedatetimebesteffortornull}

Same as [parseDateTimeBestEffort](#parsedatetimebesteffort) except that it returns `NULL` when it encounters a date format that cannot be processed.
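
For example, a minimal sketch of the `NULL` behavior (the input strings are illustrative):

``` sql
SELECT
    parseDateTimeBestEffortOrNull('2020-10-23 10:12:12') AS parsed, -- valid input, returns a DateTime
    parseDateTimeBestEffortOrNull('not a date') AS failed;          -- unparseable input, returns NULL
```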
@ -1,6 +1,19 @@
---
toc_folder_title: Statements
toc_priority: 31
toc_hidden: true
---

# ClickHouse SQL Statements

Statements represent various kinds of actions you can perform using SQL queries. Each kind of statement has its own syntax and usage details that are described separately:
- [SELECT](select/index.md)
- [INSERT INTO](insert-into.md)
- [CREATE](create.md)
- [ALTER](alter.md)
- [SYSTEM](system.md)
- [SHOW](show.md)
- [GRANT](grant.md)
- [REVOKE](revoke.md)
- [Other](misc.md)
39
docs/en/sql-reference/table-functions/cluster.md
Normal file
@ -0,0 +1,39 @@
---
toc_priority: 50
toc_title: cluster
---

# cluster, clusterAllReplicas {#cluster-clusterallreplicas}

Allows accessing all shards of an existing cluster that is configured in the `remote_servers` section without creating a [Distributed](../../engines/table-engines/special/distributed.md) table. One replica of each shard is queried.
`clusterAllReplicas` is the same as `cluster`, but all replicas are queried. Each replica in the cluster is used as a separate shard/connection.

!!! note "Note"
    All available clusters are listed in the `system.clusters` table.

Signatures:
``` sql
cluster('cluster_name', db.table)
cluster('cluster_name', db, table)
clusterAllReplicas('cluster_name', db.table)
clusterAllReplicas('cluster_name', db, table)
```

`cluster_name` – Name of a cluster that is used to build a set of addresses and connection parameters to remote and local servers.
Using the `cluster` and `clusterAllReplicas` table functions is less efficient than creating a `Distributed` table because in this case, the server connection is re-established for every request. When processing a large number of queries, please always create the `Distributed` table ahead of time, and don’t use the `cluster` and `clusterAllReplicas` table functions.

The `cluster` and `clusterAllReplicas` table functions can be useful in the following cases:

- Accessing a specific cluster for data comparison, debugging, and testing.
- Queries to various ClickHouse clusters and replicas for research purposes.
- Infrequent distributed requests that are made manually.

Connection settings like `host`, `port`, `user`, `password`, `compression`, and `secure` are taken from the `<remote_servers>` config section. See details in [Distributed engine](../../engines/table-engines/special/distributed.md).
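
As a usage sketch (the cluster name `test_cluster` is an assumption; substitute any cluster listed in `system.clusters`):

``` sql
-- One replica per shard:
SELECT hostName() AS host, count() FROM cluster('test_cluster', system.one) GROUP BY host;
-- Every replica as a separate connection:
SELECT hostName() AS host, count() FROM clusterAllReplicas('test_cluster', system.one) GROUP BY host;
```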

**See Also**

- [skip\_unavailable\_shards](../../operations/settings/settings.md#settings-skip_unavailable_shards)
- [load\_balancing](../../operations/settings/settings.md#settings-load_balancing)
@ -12,6 +12,8 @@ Signatures:

``` sql
remote('addresses_expr', db, table[, 'user'[, 'password']])
remote('addresses_expr', db.table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – An expression that generates addresses of remote servers. This may be just one server address. The server address is `host:port`, or just `host`. The host can be specified as the server name, or as the IPv4 or IPv6 address. An IPv6 address is specified in square brackets. The port is the TCP port on the remote server. If the port is omitted, it uses `tcp_port` from the server’s config file (by default, 9000).
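
For illustration, a hedged sketch of the signature above (the host name, database, table, and credentials are placeholders):

``` sql
-- Read a few rows from db.hits on a single remote server, connecting as 'default' with an empty password.
SELECT * FROM remote('example01-01-1', db.hits, 'default', '') LIMIT 3;
```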
@ -59,7 +59,6 @@ See also the description of [Data encoding method:](#settings_m

## max\_rows\_to\_read {#max-rows-to-read}

The following restrictions can be checked on each block (instead of on each row). That is, the restrictions can be broken a little.
When running a query in multiple threads, the following restrictions apply to each thread separately.

A maximum number of rows that can be read from a table when running a query.
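
For example, a sketch of applying the limit for one session (the threshold and table name are illustrative):

``` sql
SET max_rows_to_read = 1000000;
-- Fails with an exception if the query needs to read more than 1,000,000 rows.
SELECT count() FROM hits_table;
```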
@ -1,8 +1,33 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Dominio
toc_priority: 56
toc_folder_title: Dominio
toc_title: "Descripci\xF3n"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...` (see the sketch after this list)
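
These features can be seen with the built-in `IPv4` domain over `UInt32`; the following is a minimal sketch (the table name is illustrative):

``` sql
CREATE TABLE domain_demo (ip IPv4) ENGINE = Memory;
-- Human-friendly input: the string is parsed into the underlying UInt32.
INSERT INTO domain_demo (ip) VALUES ('116.106.34.242');
-- Human-friendly output: the stored UInt32 is rendered back as a dotted-quad string.
SELECT ip FROM domain_demo;
```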

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/) <!--hide-->
@ -1,32 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 58
toc_title: "Descripci\xF3n"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@ -14,6 +14,8 @@ Signature:

``` sql
remote('addresses_expr', db, table[, 'user'[, 'password']])
remote('addresses_expr', db.table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – An expression that generates addresses of remote servers. This may be just one server address. The server address is `host:port`, or just `host`. The host can be specified as the server name, or as the IPv4 or IPv6 address. An IPv6 address is specified in square brackets. The port is the TCP port on the remote server. If the port is omitted, it uses `tcp_port` from the server’s config file (by default, 9000).
@ -1,8 +1,33 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: "\u062F\u0627\u0645\u0646\u0647"
toc_priority: 56
toc_folder_title: "\u062F\u0627\u0645\u0646\u0647"
toc_title: "\u0628\u0631\u0631\u0633\u06CC \u0627\u062C\u0645\u0627\u0644\u06CC"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@ -1,32 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 58
toc_title: "\u0628\u0631\u0631\u0633\u06CC \u0627\u062C\u0645\u0627\u0644\u06CC"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@ -14,6 +14,8 @@ toc_title: "\u062F\u0648\u0631"

``` sql
remote('addresses_expr', db, table[, 'user'[, 'password']])
remote('addresses_expr', db.table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – An expression that generates addresses of remote servers. This may be just one server address. The server address is `host:port`, or just `host`. The host can be specified as the server name, or as the IPv4 or IPv6 address. An IPv6 address is specified in square brackets. The port is the TCP port on the remote server. If the port is omitted, it uses `tcp_port` from the server’s config file (by default, 9000).
@ -3,6 +3,31 @@ machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_folder_title: Domaine
toc_priority: 56
toc_title: "Aper\xE7u"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@ -1,32 +0,0 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 58
toc_title: "Aper\xE7u"
---

# Domains {#domains}

Domains are special-purpose types that add some extra features on top of the existing base type, while leaving the on-wire and on-disk format of the underlying data type intact. At the moment, ClickHouse does not support user-defined domains.

You can use domains anywhere the corresponding base type can be used, for example:

- Create a column of a domain type
- Read/write values from/to the domain column
- Use it as an index if the base type can be used as an index
- Call functions with values of the domain column

### Extra features of domains {#extra-features-of-domains}

- Explicit column type name in `SHOW CREATE TABLE` or `DESCRIBE TABLE`
- Human-friendly input with `INSERT INTO domain_table(domain_column) VALUES(...)`
- Human-friendly output for `SELECT domain_column FROM domain_table`
- Loading data from an external source in the human-friendly format: `INSERT INTO domain_table FORMAT CSV ...`

### Limitations {#limitations}

- Can’t convert the index column of the base type to the domain type via `ALTER TABLE`.
- Can’t implicitly convert string values into domain values when inserting data from another column or table.
- Domain adds no constraints on stored values.

[Original article](https://clickhouse.tech/docs/en/data_types/domains/overview) <!--hide-->
@ -14,6 +14,8 @@ Signature:

``` sql
remote('addresses_expr', db, table[, 'user'[, 'password']])
remote('addresses_expr', db.table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – An expression that generates addresses of remote servers. This may be just one server address. The server address is `host:port`, or just `host`. The host can be specified as the server name, or as the IPv4 or IPv6 address. An IPv6 address is specified in square brackets. The port is the TCP port on the remote server. If the port is omitted, it uses `tcp_port` from the server’s config file (by default, 9000).