Merge branch 'master' into uniq-theta-sketch
commit 043e7ccd37
.github/ISSUE_TEMPLATE/40_bug-report.md (vendored, 17 changes)
@@ -10,12 +10,26 @@ assignees: ''
You have to provide the following information whenever possible.

**Describe the bug**

A clear and concise description of what works not as it is supposed to.

**Does it reproduce on recent release?**

[The list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv)

+**Enable crash reporting**
+
+If possible, change "enabled" to true in "send_crash_reports" section in `config.xml`:
+
+```
+<send_crash_reports>
+    <!-- Changing <enabled> to true allows sending crash reports to -->
+    <!-- the ClickHouse core developers team via Sentry https://sentry.io -->
+    <enabled>false</enabled>
+```
+
**How to reproduce**

* Which ClickHouse server version to use
* Which interface to use, if matters
* Non-default settings, if any
@@ -24,10 +38,13 @@ A clear and concise description of what works not as it is supposed to.
* Queries to run that lead to unexpected result

**Expected behavior**

A clear and concise description of what you expected to happen.

**Error message and/or stacktrace**

If applicable, add screenshots to help explain your problem.

**Additional context**

Add any other context about the problem here.
.gitignore (vendored, 5 changes)
@@ -14,6 +14,11 @@
/build-*
/tests/venv

+# logs
+*.log
+*.stderr
+*.stdout
+
/docs/build
/docs/publish
/docs/edit
.gitmodules (vendored, 3 changes)
@@ -228,3 +228,6 @@
[submodule "contrib/datasketches-cpp"]
    path = contrib/datasketches-cpp
    url = https://github.com/ClickHouse-Extras/datasketches-cpp.git
+[submodule "contrib/yaml-cpp"]
+    path = contrib/yaml-cpp
+    url = https://github.com/ClickHouse-Extras/yaml-cpp.git
CHANGELOG.md (129 changes)
@@ -1,3 +1,131 @@
+### ClickHouse release 21.6, 2021-06-05
+
+#### Upgrade Notes
+
+* One bug has been found after release: [#25187](https://github.com/ClickHouse/ClickHouse/issues/25187).
+* Do not upgrade if you have a partition key with `UUID`.
+* The `zstd` compression library is updated to v1.5.0. You may get messages about "checksum does not match" in replication. These messages are expected due to the update of the compression algorithm and you can ignore them. They are informational and do not indicate any kind of undesired behaviour.
+* The setting `compile_expressions` is enabled by default. Although it has been heavily tested on a variety of scenarios, if you find some undesired behaviour on your servers, you can try turning this setting off.
+* Values of `UUID` type cannot be compared with an integer. For example, instead of writing `uuid != 0`, type `uuid != '00000000-0000-0000-0000-000000000000'`.
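A minimal sketch of the two adjustments called out above; `events` and `session_uuid` are hypothetical names:

```sql
-- UUID values can no longer be compared with integers; compare with a UUID literal instead.
SELECT count()
FROM events
WHERE session_uuid != '00000000-0000-0000-0000-000000000000';  -- instead of: session_uuid != 0

-- If the newly enabled JIT compilation of expressions misbehaves on your servers,
-- it can be turned off per session or in a settings profile.
SET compile_expressions = 0;
```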
+
+#### New Feature
+
+* Add Postgres-like cast operator (`::`). E.g.: `[1, 2]::Array(UInt8)`, `0.1::Decimal(4, 4)`, `number::UInt16`. [#23871](https://github.com/ClickHouse/ClickHouse/pull/23871) ([Anton Popov](https://github.com/CurtizJ)).
+* Make big integers production ready. Add support for `UInt128` data type. Fix known issues with the `Decimal256` data type. Support big integers in dictionaries. Support `gcd`/`lcm` functions for big integers. Support big integers in array search and conditional functions. Support `LowCardinality(UUID)`. Support big integers in `generateRandom` table function and `clickhouse-obfuscator`. Fix error with returning `UUID` from scalar subqueries. This fixes [#7834](https://github.com/ClickHouse/ClickHouse/issues/7834). This fixes [#23936](https://github.com/ClickHouse/ClickHouse/issues/23936). This fixes [#4176](https://github.com/ClickHouse/ClickHouse/issues/4176). This fixes [#24018](https://github.com/ClickHouse/ClickHouse/issues/24018). Backward incompatible change: values of `UUID` type cannot be compared with an integer. For example, instead of writing `uuid != 0`, type `uuid != '00000000-0000-0000-0000-000000000000'`. [#23631](https://github.com/ClickHouse/ClickHouse/pull/23631) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Support `Array` data type for inserting and selecting data in `Arrow`, `Parquet` and `ORC` formats. [#21770](https://github.com/ClickHouse/ClickHouse/pull/21770) ([taylor12805](https://github.com/taylor12805)).
+* Implement table comments. Closes [#23225](https://github.com/ClickHouse/ClickHouse/issues/23225). [#23548](https://github.com/ClickHouse/ClickHouse/pull/23548) ([flynn](https://github.com/ucasFL)).
+* Support creating dictionaries with DDL queries in `clickhouse-local`. Closes [#22354](https://github.com/ClickHouse/ClickHouse/issues/22354). Added support for `DETACH DICTIONARY PERMANENTLY`. Added support for `EXCHANGE DICTIONARIES` for `Atomic` database engine. Added support for moving dictionaries between databases using `RENAME DICTIONARY`. [#23436](https://github.com/ClickHouse/ClickHouse/pull/23436) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add aggregate function `uniqTheta` to support [Theta Sketch](https://datasketches.apache.org/docs/Theta/ThetaSketchFramework.html) in ClickHouse. [#23894](https://github.com/ClickHouse/ClickHouse/pull/23894). [#22609](https://github.com/ClickHouse/ClickHouse/pull/22609) ([Ping Yu](https://github.com/pingyu)).
+* Add function `splitByRegexp`. [#24077](https://github.com/ClickHouse/ClickHouse/pull/24077) ([abel-cheng](https://github.com/abel-cheng)).
+* Add function `arrayProduct` which accepts an array as the parameter and returns the product of all the elements in the array. Closes [#21613](https://github.com/ClickHouse/ClickHouse/issues/21613). [#23782](https://github.com/ClickHouse/ClickHouse/pull/23782) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add `thread_name` column in `system.stack_trace`. This closes [#23256](https://github.com/ClickHouse/ClickHouse/issues/23256). [#24124](https://github.com/ClickHouse/ClickHouse/pull/24124) ([abel-cheng](https://github.com/abel-cheng)).
+* If `insert_null_as_default` = 1, insert default values instead of NULL in `INSERT ... SELECT` and `INSERT ... SELECT ... UNION ALL ...` queries. Closes [#22832](https://github.com/ClickHouse/ClickHouse/issues/22832). [#23524](https://github.com/ClickHouse/ClickHouse/pull/23524) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Add support for progress indication in `clickhouse-local` with `--progress` option. [#23196](https://github.com/ClickHouse/ClickHouse/pull/23196) ([Egor Savin](https://github.com/Amesaru)).
+* Add support for HTTP compression (determined by `Content-Encoding` HTTP header) in `http` dictionary source. This fixes [#8912](https://github.com/ClickHouse/ClickHouse/issues/8912). [#23946](https://github.com/ClickHouse/ClickHouse/pull/23946) ([FArthur-cmd](https://github.com/FArthur-cmd)).
+* Added `SYSTEM QUERY RELOAD MODEL`, `SYSTEM QUERY RELOAD MODELS`. Closes [#18722](https://github.com/ClickHouse/ClickHouse/issues/18722). [#23182](https://github.com/ClickHouse/ClickHouse/pull/23182) ([Maksim Kita](https://github.com/kitaisreal)).
+* Add setting `json` (boolean, 0 by default) for `EXPLAIN PLAN` query. When enabled, query output will be a single `JSON` row. It is recommended to use `TSVRaw` format to avoid unnecessary escaping. [#23082](https://github.com/ClickHouse/ClickHouse/pull/23082) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Add setting `indexes` (boolean, disabled by default) to `EXPLAIN PIPELINE` query. When enabled, shows used indexes, number of filtered parts and granules for every index applied. Supported for `MergeTree*` tables. [#22352](https://github.com/ClickHouse/ClickHouse/pull/22352) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* LDAP: implemented user DN detection functionality to use when mapping Active Directory groups to ClickHouse roles. [#22228](https://github.com/ClickHouse/ClickHouse/pull/22228) ([Denis Glazachev](https://github.com/traceon)).
+* New aggregate function `deltaSumTimestamp` for summing the difference between consecutive rows while maintaining ordering during merge by storing timestamps. [#21888](https://github.com/ClickHouse/ClickHouse/pull/21888) ([Russ Frank](https://github.com/rf)).
+* Added less secure IMDS credentials provider for S3 which works correctly under Docker. [#21852](https://github.com/ClickHouse/ClickHouse/pull/21852) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Add back `indexHint` function. This is for [#21238](https://github.com/ClickHouse/ClickHouse/issues/21238). This reverts [#9542](https://github.com/ClickHouse/ClickHouse/pull/9542). This fixes [#9540](https://github.com/ClickHouse/ClickHouse/issues/9540). [#21304](https://github.com/ClickHouse/ClickHouse/pull/21304) ([Amos Bird](https://github.com/amosbird)).
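As a rough illustration of several of the new functions and the cast operator listed above, combined into one query (`visits` and `user_id` are hypothetical names):

```sql
SELECT
    uniqTheta(user_id)              AS approx_unique_users,  -- Theta-Sketch-based approximate count distinct
    arrayProduct([1, 2, 3, 4])      AS product,              -- product of all array elements
    splitByRegexp('[,;]', 'a,b;c')  AS tokens,               -- split a string by a regular expression
    '2021-06-05'::Date              AS release_day           -- Postgres-like cast operator
FROM visits;
```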
+
+#### Experimental Feature
+
+* Add `PROJECTION` support for `MergeTree*` tables. [#20202](https://github.com/ClickHouse/ClickHouse/pull/20202) ([Amos Bird](https://github.com/amosbird)).
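A sketch of what defining a projection on an existing MergeTree table can look like; the table, projection and column names are made up, and since the feature is experimental the exact syntax and gating settings may differ in this release:

```sql
-- Add an aggregating projection and build it for already existing parts.
ALTER TABLE hits ADD PROJECTION by_user
(
    SELECT user_id, count()
    GROUP BY user_id
);

ALTER TABLE hits MATERIALIZE PROJECTION by_user;
```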
+
+#### Performance Improvement
+
+* Enable the `compile_expressions` setting by default. When this setting is enabled, compositions of simple functions and operators will be compiled to native code with LLVM at runtime. [#8482](https://github.com/ClickHouse/ClickHouse/pull/8482) ([Maksim Kita](https://github.com/kitaisreal), [alexey-milovidov](https://github.com/alexey-milovidov)). Note: if you run into trouble, turn this option off.
+* Update `re2` library. Performance of regular expressions matching is improved. Also this PR adds compatibility with gcc-11. [#24196](https://github.com/ClickHouse/ClickHouse/pull/24196) ([Raúl Marín](https://github.com/Algunenano)).
+* The ORC input format now reads stripe by stripe instead of reading the entire table into memory at once, which saves memory when the file size is huge. [#23102](https://github.com/ClickHouse/ClickHouse/pull/23102) ([Chao Ma](https://github.com/godliness)).
+* Fusion of aggregate functions `sum`, `count` and `avg` in a query into a single aggregate function. The optimization is controlled with the `optimize_fuse_sum_count_avg` setting. This is implemented with a new aggregate function `sumCount`. This function returns a tuple of two fields: `sum` and `count`. [#21337](https://github.com/ClickHouse/ClickHouse/pull/21337) ([hexiaoting](https://github.com/hexiaoting)).
+* Update `zstd` to v1.5.0. The performance of compression is improved by single-digit percentages. [#24135](https://github.com/ClickHouse/ClickHouse/pull/24135) ([Raúl Marín](https://github.com/Algunenano)). Note: you may get messages about "checksum does not match" in replication. These messages are expected due to the update of the compression algorithm and you can ignore them.
+* Improved performance of `Buffer` tables: do not acquire lock for total_bytes/total_rows for `Buffer` engine. [#24066](https://github.com/ClickHouse/ClickHouse/pull/24066) ([Azat Khuzhin](https://github.com/azat)).
+* Preallocate support for hashed/sparse_hashed dictionaries is returned. [#23979](https://github.com/ClickHouse/ClickHouse/pull/23979) ([Azat Khuzhin](https://github.com/azat)).
+* Enable `async_socket_for_remote` by default (lowers the number of threads when querying Distributed tables with large fanout). [#23683](https://github.com/ClickHouse/ClickHouse/pull/23683) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
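A sketch of the sum/count/avg fusion described above (`payments` and `amount` are hypothetical names):

```sql
SET optimize_fuse_sum_count_avg = 1;

SELECT
    sum(amount),
    count(amount),
    avg(amount)
FROM payments;
-- With the setting enabled, the three aggregates above can be served by a single
-- sumCount(amount) call, which returns a tuple of (sum, count).
```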
+
+#### Improvement
+
+* Add `_partition_value` virtual column to the MergeTree table family. It can be used to prune partitions in a deterministic way. It is needed to implement a partition matcher for mutations. [#23673](https://github.com/ClickHouse/ClickHouse/pull/23673) ([Amos Bird](https://github.com/amosbird)).
+* Added `region` parameter for S3 storage and disk. [#23846](https://github.com/ClickHouse/ClickHouse/pull/23846) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Allow configuring different log levels for different logging channels. Closes [#19569](https://github.com/ClickHouse/ClickHouse/issues/19569). [#23857](https://github.com/ClickHouse/ClickHouse/pull/23857) ([filimonov](https://github.com/filimonov)).
+* Keep the default timezone on `DateTime` operations if it was not provided explicitly. For example, if you add one second to a value of `DateTime` type without timezone it will remain `DateTime` without timezone. In previous versions the value of the default timezone was placed into the returned data type explicitly, so it became `DateTime('something')`. This closes [#4854](https://github.com/ClickHouse/ClickHouse/issues/4854). [#23392](https://github.com/ClickHouse/ClickHouse/pull/23392) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Allow the user to specify an empty string instead of a database name for the `MySQL` storage. The default database will be used for queries. In previous versions this worked only for SELECT queries; support for INSERT has been added as well. This closes [#19281](https://github.com/ClickHouse/ClickHouse/issues/19281). This can be useful when working with `Sphinx` or other MySQL-compatible foreign databases. [#23319](https://github.com/ClickHouse/ClickHouse/pull/23319) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also a bug with over-compression of centroids in the implementation of an earlier version of the algorithm was fixed. [#23314](https://github.com/ClickHouse/ClickHouse/pull/23314) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Function `now64` now supports an optional timezone argument. [#24091](https://github.com/ClickHouse/ClickHouse/pull/24091) ([Vasily Nemkov](https://github.com/Enmk)).
+* Fix the case when a progress bar in interactive mode in clickhouse-client that appears in the middle of the data may overwrite some parts of visible data in the terminal. This closes [#19283](https://github.com/ClickHouse/ClickHouse/issues/19283). [#23050](https://github.com/ClickHouse/ClickHouse/pull/23050) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix crash when memory allocation fails in simdjson. https://github.com/simdjson/simdjson/pull/1567. Marked as an improvement because it's a very rare bug. [#24147](https://github.com/ClickHouse/ClickHouse/pull/24147) ([Amos Bird](https://github.com/amosbird)).
+* Preserve dictionaries until storage shutdown (this will avoid possible `external dictionary 'DICT' not found` errors at server shutdown during the final flush of the `Buffer` engine). [#24068](https://github.com/ClickHouse/ClickHouse/pull/24068) ([Azat Khuzhin](https://github.com/azat)).
+* Flush `Buffer` tables before shutting down tables (within one database), to avoid discarding blocks because the underlying table had already been detached (and the `Destination table default.a_data_01870 doesn't exist. Block of data is discarded` error in the log). [#24067](https://github.com/ClickHouse/ClickHouse/pull/24067) ([Azat Khuzhin](https://github.com/azat)).
+* Now `prefer_column_name_to_alias = 1` will also favor column names for `group by`, `having` and `order by`. This fixes [#23882](https://github.com/ClickHouse/ClickHouse/issues/23882). [#24022](https://github.com/ClickHouse/ClickHouse/pull/24022) ([Amos Bird](https://github.com/amosbird)).
+* Add support for `ORDER BY WITH FILL` with `DateTime64`. [#24016](https://github.com/ClickHouse/ClickHouse/pull/24016) ([kevin wan](https://github.com/MaxWk)).
+* Enable `DateTime64` to be a version column in `ReplacingMergeTree`. [#23992](https://github.com/ClickHouse/ClickHouse/pull/23992) ([kevin wan](https://github.com/MaxWk)).
+* Log information about OS name, kernel version and CPU architecture on server startup. [#23988](https://github.com/ClickHouse/ClickHouse/pull/23988) ([Azat Khuzhin](https://github.com/azat)).
+* Support specifying table schema for `postgresql` dictionary source. Closes [#23958](https://github.com/ClickHouse/ClickHouse/issues/23958). [#23980](https://github.com/ClickHouse/ClickHouse/pull/23980) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Add hints for names of `Enum` elements (suggest names in case of typos). Closes [#17112](https://github.com/ClickHouse/ClickHouse/issues/17112). [#23919](https://github.com/ClickHouse/ClickHouse/pull/23919) ([flynn](https://github.com/ucasFL)).
+* Measure found rate (the percentage for which the value was found) for dictionaries (see `found_rate` in `system.dictionaries`). [#23916](https://github.com/ClickHouse/ClickHouse/pull/23916) ([Azat Khuzhin](https://github.com/azat)).
+* Allow adding specific queue settings via the table setting `rabbitmq_queue_settings_list`. (Closes [#23737](https://github.com/ClickHouse/ClickHouse/issues/23737) and [#23918](https://github.com/ClickHouse/ClickHouse/issues/23918)). Allow the user to control the whole RabbitMQ setup: if the table setting `rabbitmq_queue_consume` is set to `1`, the RabbitMQ table engine will only connect to the specified queue and will not perform any RabbitMQ consumer-side setup such as declaring exchanges, queues or bindings. (Closes [#21757](https://github.com/ClickHouse/ClickHouse/issues/21757)). Add proper cleanup when a RabbitMQ table is dropped: delete the queues which the table has declared and all bound exchanges, if they were created by the table. [#23887](https://github.com/ClickHouse/ClickHouse/pull/23887) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Add `broken_data_files`/`broken_data_compressed_bytes` into `system.distribution_queue`. Add a metric for the number of files for asynchronous insertion into Distributed tables that have been marked as broken (`BrokenDistributedFilesToInsert`). [#23885](https://github.com/ClickHouse/ClickHouse/pull/23885) ([Azat Khuzhin](https://github.com/azat)).
+* Querying `system.tables` does not go to ZooKeeper anymore. [#23793](https://github.com/ClickHouse/ClickHouse/pull/23793) ([Fuwang Hu](https://github.com/fuwhu)).
+* Respect `lock_acquire_timeout_for_background_operations` for `OPTIMIZE` queries. [#23623](https://github.com/ClickHouse/ClickHouse/pull/23623) ([Azat Khuzhin](https://github.com/azat)).
+* Possibility to change `S3` disk settings at runtime via the new `SYSTEM RESTART DISK` SQL command. [#23429](https://github.com/ClickHouse/ClickHouse/pull/23429) ([Pavel Kovalenko](https://github.com/Jokser)).
+* If a user applied a misconfiguration by mistakenly setting `max_distributed_connections` to the value zero, every query to a `Distributed` table will throw an exception with a message containing "logical error". But it's really an expected behaviour, not a logical error, so the exception message was slightly incorrect. It also triggered checks in our CI environment that ensure that no logical errors ever happen. Instead we will treat `max_distributed_connections` misconfigured to zero as the minimum possible value (one). [#23348](https://github.com/ClickHouse/ClickHouse/pull/23348) ([Azat Khuzhin](https://github.com/azat)).
+* Disable `min_bytes_to_use_mmap_io` by default. [#23322](https://github.com/ClickHouse/ClickHouse/pull/23322) ([Azat Khuzhin](https://github.com/azat)).
+* Support `LowCardinality` nullability with `join_use_nulls`, close [#15101](https://github.com/ClickHouse/ClickHouse/issues/15101). [#23237](https://github.com/ClickHouse/ClickHouse/pull/23237) ([vdimir](https://github.com/vdimir)).
+* Added possibility to restore `MergeTree` parts to the `detached` directory for the `S3` disk. [#23112](https://github.com/ClickHouse/ClickHouse/pull/23112) ([Pavel Kovalenko](https://github.com/Jokser)).
+* Retries on HTTP connection drops in S3. [#22988](https://github.com/ClickHouse/ClickHouse/pull/22988) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Add settings `external_storage_max_read_rows` and `external_storage_max_read_bytes` for the MySQL table engine, dictionary source and MaterializeMySQL minor data fetches. [#22697](https://github.com/ClickHouse/ClickHouse/pull/22697) ([TCeason](https://github.com/TCeason)).
+* `MaterializeMySQL` (experimental feature): Previously, MySQL 5.7.9 was not supported due to SQL incompatibility. Now leave MySQL parameter verification to MaterializeMySQL. [#23413](https://github.com/ClickHouse/ClickHouse/pull/23413) ([TCeason](https://github.com/TCeason)).
+* Enable reading of subcolumns for distributed tables. [#24472](https://github.com/ClickHouse/ClickHouse/pull/24472) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix usage of tuples in `CREATE .. AS SELECT` queries. [#24464](https://github.com/ClickHouse/ClickHouse/pull/24464) ([Anton Popov](https://github.com/CurtizJ)).
+* Support for `Parquet` format in `Kafka` tables. [#23412](https://github.com/ClickHouse/ClickHouse/pull/23412) ([Chao Ma](https://github.com/godliness)).
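Small illustrations of a few of the improvements above (`hits` is a hypothetical table):

```sql
-- now64 accepts an optional timezone argument.
SELECT now64(3, 'UTC');

-- New virtual column of MergeTree tables; here it gives per-partition row counts.
SELECT _partition_value, count()
FROM hits
GROUP BY _partition_value;

-- Column names now also win over aliases in GROUP BY / HAVING / ORDER BY.
SET prefer_column_name_to_alias = 1;
```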
+
+#### Bug Fix
+
+* Use the old modulo function version when used in partition key and primary key. Closes [#23508](https://github.com/ClickHouse/ClickHouse/issues/23508). [#24157](https://github.com/ClickHouse/ClickHouse/pull/24157) ([Kseniia Sumarokova](https://github.com/kssenii)). It was a source of backward incompatibility in previous releases.
+* Fixed the behavior when the query `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` is being processed infinitely. This was detected on a server with an extremely small amount of RAM. [#24457](https://github.com/ClickHouse/ClickHouse/pull/24457) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix incorrect monotonicity of the `toWeek` function. This fixes [#24422](https://github.com/ClickHouse/ClickHouse/issues/24422). This bug was introduced in [#5212](https://github.com/ClickHouse/ClickHouse/pull/5212) and was exposed later by a smarter partition pruner. [#24446](https://github.com/ClickHouse/ClickHouse/pull/24446) ([Amos Bird](https://github.com/amosbird)).
+* Fix drop partition with intersecting fake parts. In rare cases there might be parts with a mutation version greater than the current block number. [#24321](https://github.com/ClickHouse/ClickHouse/pull/24321) ([Amos Bird](https://github.com/amosbird)).
+* Fixed a bug in moving a Materialized View from an Ordinary to an Atomic database (`RENAME TABLE` query). Now the inner table is moved to the new database together with the Materialized View. Fixes [#23926](https://github.com/ClickHouse/ClickHouse/issues/23926). [#24309](https://github.com/ClickHouse/ClickHouse/pull/24309) ([tavplubix](https://github.com/tavplubix)).
+* Allow empty HTTP headers in client requests. Fixes [#23901](https://github.com/ClickHouse/ClickHouse/issues/23901). [#24285](https://github.com/ClickHouse/ClickHouse/pull/24285) ([Ivan](https://github.com/abyss7)).
+* Set `max_threads = 1` to fix mutation failures of `Memory` tables. Closes [#24274](https://github.com/ClickHouse/ClickHouse/issues/24274). [#24275](https://github.com/ClickHouse/ClickHouse/pull/24275) ([flynn](https://github.com/ucasFL)).
+* Fix a typo in the implementation of `Memory` tables; this bug was introduced in [#15127](https://github.com/ClickHouse/ClickHouse/issues/15127). Closes [#24192](https://github.com/ClickHouse/ClickHouse/issues/24192). [#24193](https://github.com/ClickHouse/ClickHouse/pull/24193) ([张中南](https://github.com/plugine)).
+* Fix abnormal server termination due to `HDFS` becoming inaccessible during query execution. Closes [#24117](https://github.com/ClickHouse/ClickHouse/issues/24117). [#24191](https://github.com/ClickHouse/ClickHouse/pull/24191) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix crash when updating a `Nested` column with a const condition. [#24183](https://github.com/ClickHouse/ClickHouse/pull/24183) ([hexiaoting](https://github.com/hexiaoting)).
+* Fix a race condition which could happen in RBAC under a heavy load. This PR fixes [#24090](https://github.com/ClickHouse/ClickHouse/issues/24090), [#24134](https://github.com/ClickHouse/ClickHouse/issues/24134). [#24176](https://github.com/ClickHouse/ClickHouse/pull/24176) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix a rare bug that could lead to a partially initialized table that can serve write requests (insert/alter/so on). Now such tables will be in readonly mode. [#24122](https://github.com/ClickHouse/ClickHouse/pull/24122) ([alesapin](https://github.com/alesapin)).
+* Fix an issue: `EXPLAIN PIPELINE` with `SELECT xxx FINAL` showed a wrong pipeline. ([hexiaoting](https://github.com/hexiaoting)).
+* Fixed using a const `DateTime` value vs a `DateTime64` column in `WHERE`. [#24100](https://github.com/ClickHouse/ClickHouse/pull/24100) ([Vasily Nemkov](https://github.com/Enmk)).
+* Fix crash in merge JOIN, closes [#24010](https://github.com/ClickHouse/ClickHouse/issues/24010). [#24013](https://github.com/ClickHouse/ClickHouse/pull/24013) ([vdimir](https://github.com/vdimir)).
+* Some `ALTER PARTITION` queries might cause `Part A intersects previous part B` and `Unexpected merged part C intersecting drop range D` errors in the replication queue. It's fixed. Fixes [#23296](https://github.com/ClickHouse/ClickHouse/issues/23296). [#23997](https://github.com/ClickHouse/ClickHouse/pull/23997) ([tavplubix](https://github.com/tavplubix)).
+* Fix SIGSEGV for external GROUP BY and overflow row (i.e. queries like `SELECT FROM GROUP BY WITH TOTALS SETTINGS max_bytes_before_external_group_by>0, max_rows_to_group_by>0, group_by_overflow_mode='any', totals_mode='before_having'`). [#23962](https://github.com/ClickHouse/ClickHouse/pull/23962) ([Azat Khuzhin](https://github.com/azat)).
+* Fix keys metrics accounting for a `CACHE` dictionary with duplicates in the source (leads to `DictCacheKeysRequestedMiss` overflows). [#23929](https://github.com/ClickHouse/ClickHouse/pull/23929) ([Azat Khuzhin](https://github.com/azat)).
+* Fix the implementation of the connection pool of the `PostgreSQL` engine. Closes [#23897](https://github.com/ClickHouse/ClickHouse/issues/23897). [#23909](https://github.com/ClickHouse/ClickHouse/pull/23909) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix `distributed_group_by_no_merge = 2` with `GROUP BY` and an aggregate function wrapped into a regular function (had been broken in [#23546](https://github.com/ClickHouse/ClickHouse/issues/23546)). Throw an exception in case someone tries to use `distributed_group_by_no_merge = 2` with window functions. Disable `optimize_distributed_group_by_sharding_key` for queries with window functions. [#23906](https://github.com/ClickHouse/ClickHouse/pull/23906) ([Azat Khuzhin](https://github.com/azat)).
+* A fix for the `s3` table function: better handling of HTTP errors. Response bodies of HTTP errors were being ignored earlier. [#23844](https://github.com/ClickHouse/ClickHouse/pull/23844) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* A fix for the `s3` table function: better handling of URIs. Fixed an incompatibility with URLs containing the `+` symbol; data with such keys could not be read previously. [#23822](https://github.com/ClickHouse/ClickHouse/pull/23822) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Fix the error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` and `use_hedged_requests`. Fixes [#23431](https://github.com/ClickHouse/ClickHouse/issues/23431). [#23805](https://github.com/ClickHouse/ClickHouse/pull/23805) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix `CLEAR COLUMN` not working when the column is referenced by a materialized view. Close [#23764](https://github.com/ClickHouse/ClickHouse/issues/23764). [#23781](https://github.com/ClickHouse/ClickHouse/pull/23781) ([flynn](https://github.com/ucasFL)).
+* Fix heap use after free when reading from HDFS if the `Values` format is used. [#23761](https://github.com/ClickHouse/ClickHouse/pull/23761) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Avoid a possible "Cannot schedule a task" error (in case some exception had occurred) on INSERT into Distributed. [#23744](https://github.com/ClickHouse/ClickHouse/pull/23744) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed a bug in recovery of a stale `ReplicatedMergeTree` replica. Some metadata updates could be ignored by a stale replica if an `ALTER` query was executed during downtime of the replica. [#23742](https://github.com/ClickHouse/ClickHouse/pull/23742) ([tavplubix](https://github.com/tavplubix)).
+* Fix a bug with `Join` and `WITH TOTALS`, close [#17718](https://github.com/ClickHouse/ClickHouse/issues/17718). [#23549](https://github.com/ClickHouse/ClickHouse/pull/23549) ([vdimir](https://github.com/vdimir)).
+* Fix a possible `Block structure mismatch` error for queries with `UNION` which could happen after the filter-pushdown optimization. Fixes [#23029](https://github.com/ClickHouse/ClickHouse/issues/23029). [#23359](https://github.com/ClickHouse/ClickHouse/pull/23359) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Add type conversion when the setting `optimize_skip_unused_shards_rewrite_in` is enabled. This fixes an MSan report. [#23219](https://github.com/ClickHouse/ClickHouse/pull/23219) ([Azat Khuzhin](https://github.com/azat)).
+* Add a missing check when updating nested subcolumns, close issue: [#22353](https://github.com/ClickHouse/ClickHouse/issues/22353). [#22503](https://github.com/ClickHouse/ClickHouse/pull/22503) ([hexiaoting](https://github.com/hexiaoting)).
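The external GROUP BY fix above refers to queries of roughly this shape, written out here with a hypothetical table; the settings are the ones quoted in that entry:

```sql
SELECT key, count()
FROM big_table
GROUP BY key WITH TOTALS
SETTINGS
    max_bytes_before_external_group_by = 100000000,
    max_rows_to_group_by = 10,
    group_by_overflow_mode = 'any',
    totals_mode = 'before_having';
```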
+
+#### Build/Testing/Packaging Improvement
+
+* Support building on Illumos. [#24144](https://github.com/ClickHouse/ClickHouse/pull/24144). Adds support for building on Solaris-derived operating systems. [#23746](https://github.com/ClickHouse/ClickHouse/pull/23746) ([bnaecker](https://github.com/bnaecker)).
+* Add more benchmarks for hash tables, including the Swiss Table from Google (which appeared to be slower than the ClickHouse hash map in our specific usage scenario). [#24111](https://github.com/ClickHouse/ClickHouse/pull/24111) ([Maksim Kita](https://github.com/kitaisreal)).
+* Update librdkafka 1.6.0-RC3 to 1.6.1. [#23874](https://github.com/ClickHouse/ClickHouse/pull/23874) ([filimonov](https://github.com/filimonov)).
+* Always enable `asynchronous-unwind-tables` explicitly. It may fix the query profiler on AArch64. [#23602](https://github.com/ClickHouse/ClickHouse/pull/23602) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Avoid possible build dependency on locale and filesystem order. This allows reproducible builds. [#23600](https://github.com/ClickHouse/ClickHouse/pull/23600) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Remove a source of nondeterminism from the build. Now builds at different points in time will produce byte-identical binaries. Partially addressed [#22113](https://github.com/ClickHouse/ClickHouse/issues/22113). [#23559](https://github.com/ClickHouse/ClickHouse/pull/23559) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Add a simple tool for benchmarking (Zoo)Keeper. [#23038](https://github.com/ClickHouse/ClickHouse/pull/23038) ([alesapin](https://github.com/alesapin)).
+
+
## ClickHouse release 21.5, 2021-05-20

#### Backward Incompatible Change
@@ -637,6 +765,7 @@
* Allow using extended integer types (`Int128`, `Int256`, `UInt256`) in `avg` and `avgWeighted` functions. Also allow using different types (integer, decimal, floating point) for value and for weight in `avgWeighted` function. This is a backward-incompatible change: now the `avg` and `avgWeighted` functions always return `Float64` (as documented). Before this change the return type for `Decimal` arguments was also `Decimal`. [#15419](https://github.com/ClickHouse/ClickHouse/pull/15419) ([Mike](https://github.com/myrrc)).
* Expression `toUUID(N)` no longer works. Replace with `toUUID('00000000-0000-0000-0000-000000000000')`. This change is motivated by non-obvious results of `toUUID(N)` where N is non zero.
* SSL Certificates with incorrect "key usage" are rejected. In previous versions they used to work. See [#19262](https://github.com/ClickHouse/ClickHouse/issues/19262).
+* `incl` references to substitutions file (`/etc/metrika.xml`) were removed from the default config (`<remote_servers>`, `<zookeeper>`, `<macros>`, `<compression>`, `<networks>`). If you were using the substitutions file and were relying on those implicit references, you should put them back manually and explicitly by adding corresponding sections with `incl="..."` attributes before the update. See [#18740](https://github.com/ClickHouse/ClickHouse/pull/18740) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### New Feature
@@ -36,7 +36,7 @@ option(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION
if(FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION)
    set(RECONFIGURE_MESSAGE_LEVEL FATAL_ERROR)
else()
-    set(RECONFIGURE_MESSAGE_LEVEL STATUS)
+    set(RECONFIGURE_MESSAGE_LEVEL WARNING)
endif()

enable_language(C CXX ASM)
@@ -504,7 +504,6 @@ include (cmake/find/libuv.cmake) # for amqpcpp and cassandra
include (cmake/find/amqpcpp.cmake)
include (cmake/find/capnp.cmake)
include (cmake/find/llvm.cmake)
-include (cmake/find/termcap.cmake) # for external static llvm
include (cmake/find/h3.cmake)
include (cmake/find/libxml2.cmake)
include (cmake/find/brotli.cmake)
@@ -527,7 +526,7 @@ include (cmake/find/nanodbc.cmake)
include (cmake/find/rocksdb.cmake)
include (cmake/find/libpqxx.cmake)
include (cmake/find/nuraft.cmake)
+include (cmake/find/yaml-cpp.cmake)

if(NOT USE_INTERNAL_PARQUET_LIBRARY)
    set (ENABLE_ORC OFF CACHE INTERNAL "")
@@ -13,3 +13,6 @@ ClickHouse® is an open-source column-oriented database management system that a
* [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation.
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
* You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.
+
+## Upcoming Events
+* [SF Bay Area ClickHouse Community Meetup (online)](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/278144089/) on 16 June 2021.
@@ -3,5 +3,11 @@ add_library (bridge
)

target_include_directories (daemon PUBLIC ..)
-target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC)
+target_link_libraries (bridge
+    PRIVATE
+        daemon
+        dbms
+        Poco::Data
+        Poco::Data::ODBC
+)
@@ -26,8 +26,6 @@
#include <Poco/Observer.h>
#include <Poco/AutoPtr.h>
#include <Poco/PatternFormatter.h>
-#include <Poco/File.h>
-#include <Poco/Path.h>
#include <Poco/Message.h>
#include <Poco/Util/Application.h>
#include <Poco/Exception.h>
@@ -59,6 +57,7 @@
#include <Common/getExecutablePath.h>
#include <Common/getHashOfLoadedBinary.h>
#include <Common/Elf.h>
+#include <filesystem>

#if !defined(ARCADIA_BUILD)
#    include <Common/config_version.h>
@@ -70,6 +69,7 @@
#endif
#include <ucontext.h>

+namespace fs = std::filesystem;

DB::PipeFDs signal_pipe;
@@ -437,11 +437,11 @@ static void sanitizerDeathCallback()

static std::string createDirectory(const std::string & file)
{
-    auto path = Poco::Path(file).makeParent();
-    if (path.toString().empty())
+    fs::path path = fs::path(file).parent_path();
+    if (path.empty())
        return "";
-    Poco::File(path).createDirectories();
-    return path.toString();
+    fs::create_directories(path);
+    return path;
};

@@ -449,7 +449,7 @@ static bool tryCreateDirectories(Poco::Logger * logger, const std::string & path
{
    try
    {
-        Poco::File(path).createDirectories();
+        fs::create_directories(path);
        return true;
    }
    catch (...)
@@ -470,7 +470,7 @@ void BaseDaemon::reloadConfiguration()
     */
    config_path = config().getString("config-file", getDefaultConfigFileName());
    DB::ConfigProcessor config_processor(config_path, false, true);
-    config_processor.setConfigPath(Poco::Path(config_path).makeParent().toString());
+    config_processor.setConfigPath(fs::path(config_path).parent_path());
    loaded_config = config_processor.loadConfig(/* allow_zk_includes = */ true);

    if (last_configuration != nullptr)
@@ -524,18 +524,20 @@ std::string BaseDaemon::getDefaultConfigFileName() const
void BaseDaemon::closeFDs()
{
#if defined(OS_FREEBSD) || defined(OS_DARWIN)
-    Poco::File proc_path{"/dev/fd"};
+    fs::path proc_path{"/dev/fd"};
#else
-    Poco::File proc_path{"/proc/self/fd"};
+    fs::path proc_path{"/proc/self/fd"};
#endif
-    if (proc_path.isDirectory()) /// Hooray, proc exists
+    if (fs::is_directory(proc_path)) /// Hooray, proc exists
    {
-        std::vector<std::string> fds;
-        /// in /proc/self/fd directory filenames are numeric file descriptors
-        proc_path.list(fds);
-        for (const auto & fd_str : fds)
+        /// in /proc/self/fd directory filenames are numeric file descriptors.
+        /// Iterate directory separately from closing fds to avoid closing iterated directory fd.
+        std::vector<int> fds;
+        for (const auto & path : fs::directory_iterator(proc_path))
+            fds.push_back(DB::parse<int>(path.path().filename()));
+
+        for (const auto & fd : fds)
        {
-            int fd = DB::parse<int>(fd_str);
            if (fd > 2 && fd != signal_pipe.fds_rw[0] && fd != signal_pipe.fds_rw[1])
                ::close(fd);
        }
@@ -597,7 +599,7 @@ void BaseDaemon::initialize(Application & self)
    {
        /** When creating pid file and looking for config, will search for paths relative to the working path of the program when started.
          */
-        std::string path = Poco::Path(config().getString("application.path")).setFileName("").toString();
+        std::string path = fs::path(config().getString("application.path")).replace_filename("");
        if (0 != chdir(path.c_str()))
            throw Poco::Exception("Cannot change directory to " + path);
    }
@@ -645,7 +647,7 @@ void BaseDaemon::initialize(Application & self)

    std::string log_path = config().getString("logger.log", "");
    if (!log_path.empty())
-        log_path = Poco::Path(log_path).setFileName("").toString();
+        log_path = fs::path(log_path).replace_filename("");

    /** Redirect stdout, stderr to separate files in the log directory (or in the specified file).
      * Some libraries write to stderr in case of errors in debug mode,
@@ -708,8 +710,7 @@ void BaseDaemon::initialize(Application & self)

        tryCreateDirectories(&logger(), core_path);

-        Poco::File cores = core_path;
-        if (!(cores.exists() && cores.isDirectory()))
+        if (!(fs::exists(core_path) && fs::is_directory(core_path)))
        {
            core_path = !log_path.empty() ? log_path : "/opt/";
            tryCreateDirectories(&logger(), core_path);
@@ -1,6 +1,5 @@
#include <daemon/SentryWriter.h>

-#include <Poco/File.h>
#include <Poco/Util/Application.h>
#include <Poco/Util/LayeredConfiguration.h>

@@ -25,6 +24,7 @@
#    include <stdio.h>
#    include <filesystem>

+namespace fs = std::filesystem;

namespace
{
@@ -53,8 +53,7 @@ void setExtras()
    sentry_set_extra("physical_cpu_cores", sentry_value_new_int32(getNumberOfPhysicalCPUCores()));

    if (!server_data_path.empty())
-        sentry_set_extra("disk_free_space", sentry_value_new_string(formatReadableSizeWithBinarySuffix(
-            Poco::File(server_data_path).freeSpace()).c_str()));
+        sentry_set_extra("disk_free_space", sentry_value_new_string(formatReadableSizeWithBinarySuffix(fs::space(server_data_path).free).c_str()));
}

void sentry_logger(sentry_level_e level, const char * message, va_list args, void *)
@@ -110,12 +109,12 @@ void SentryWriter::initialize(Poco::Util::LayeredConfiguration & config)
    if (enabled)
    {
        server_data_path = config.getString("path", "");
-        const std::filesystem::path & default_tmp_path = std::filesystem::path(config.getString("tmp_path", Poco::Path::temp())) / "sentry";
+        const std::filesystem::path & default_tmp_path = fs::path(config.getString("tmp_path", fs::temp_directory_path())) / "sentry";
        const std::string & endpoint
            = config.getString("send_crash_reports.endpoint");
        const std::string & temp_folder_path
            = config.getString("send_crash_reports.tmp_path", default_tmp_path);
-        Poco::File(temp_folder_path).createDirectories();
+        fs::create_directories(temp_folder_path);

        sentry_options_t * options = sentry_options_new();  /// will be freed by sentry_init or sentry_shutdown
        sentry_options_set_release(options, VERSION_STRING_SHORT);
@@ -6,10 +6,11 @@
#include "OwnFormattingChannel.h"
#include "OwnPatternFormatter.h"
#include <Poco/ConsoleChannel.h>
-#include <Poco/File.h>
#include <Poco/Logger.h>
#include <Poco/Net/RemoteSyslogChannel.h>
-#include <Poco/Path.h>
+#include <filesystem>

+namespace fs = std::filesystem;

namespace DB
{
|
|||||||
// TODO: move to libcommon
|
// TODO: move to libcommon
|
||||||
static std::string createDirectory(const std::string & file)
|
static std::string createDirectory(const std::string & file)
|
||||||
{
|
{
|
||||||
auto path = Poco::Path(file).makeParent();
|
auto path = fs::path(file).parent_path();
|
||||||
if (path.toString().empty())
|
if (path.empty())
|
||||||
return "";
|
return "";
|
||||||
Poco::File(path).createDirectories();
|
fs::create_directories(path);
|
||||||
return path.toString();
|
return path;
|
||||||
};
|
};
|
||||||
|
|
||||||
void Loggers::setTextLog(std::shared_ptr<DB::TextLog> log, int max_priority)
|
void Loggers::setTextLog(std::shared_ptr<DB::TextLog> log, int max_priority)
|
||||||
@ -70,7 +71,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
|
|||||||
|
|
||||||
// Set up two channel chains.
|
// Set up two channel chains.
|
||||||
log_file = new Poco::FileChannel;
|
log_file = new Poco::FileChannel;
|
||||||
log_file->setProperty(Poco::FileChannel::PROP_PATH, Poco::Path(log_path).absolute().toString());
|
log_file->setProperty(Poco::FileChannel::PROP_PATH, fs::weakly_canonical(log_path));
|
||||||
log_file->setProperty(Poco::FileChannel::PROP_ROTATION, config.getRawString("logger.size", "100M"));
|
log_file->setProperty(Poco::FileChannel::PROP_ROTATION, config.getRawString("logger.size", "100M"));
|
||||||
log_file->setProperty(Poco::FileChannel::PROP_ARCHIVE, "number");
|
log_file->setProperty(Poco::FileChannel::PROP_ARCHIVE, "number");
|
||||||
log_file->setProperty(Poco::FileChannel::PROP_COMPRESS, config.getRawString("logger.compress", "true"));
|
log_file->setProperty(Poco::FileChannel::PROP_COMPRESS, config.getRawString("logger.compress", "true"));
|
||||||
@ -102,7 +103,7 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
|
|||||||
std::cerr << "Logging errors to " << errorlog_path << std::endl;
|
std::cerr << "Logging errors to " << errorlog_path << std::endl;
|
||||||
|
|
||||||
error_log_file = new Poco::FileChannel;
|
error_log_file = new Poco::FileChannel;
|
||||||
error_log_file->setProperty(Poco::FileChannel::PROP_PATH, Poco::Path(errorlog_path).absolute().toString());
|
error_log_file->setProperty(Poco::FileChannel::PROP_PATH, fs::weakly_canonical(errorlog_path));
|
||||||
error_log_file->setProperty(Poco::FileChannel::PROP_ROTATION, config.getRawString("logger.size", "100M"));
|
error_log_file->setProperty(Poco::FileChannel::PROP_ROTATION, config.getRawString("logger.size", "100M"));
|
||||||
error_log_file->setProperty(Poco::FileChannel::PROP_ARCHIVE, "number");
|
error_log_file->setProperty(Poco::FileChannel::PROP_ARCHIVE, "number");
|
||||||
error_log_file->setProperty(Poco::FileChannel::PROP_COMPRESS, config.getRawString("logger.compress", "true"));
|
error_log_file->setProperty(Poco::FileChannel::PROP_COMPRESS, config.getRawString("logger.compress", "true"));
|
||||||
|
@@ -4,12 +4,14 @@
#include <Core/Block.h>
#include <Interpreters/InternalTextLogsQueue.h>
#include <Interpreters/TextLog.h>
+#include <IO/WriteBufferFromFileDescriptor.h>
#include <sys/time.h>
#include <Poco/Message.h>
#include <Common/CurrentThread.h>
#include <Common/DNSResolver.h>
#include <common/getThreadId.h>
#include <Common/SensitiveDataMasker.h>
+#include <Common/IO.h>

namespace DB
{
@@ -26,16 +28,48 @@ void OwnSplitChannel::log(const Poco::Message & msg)
        auto matches = masker->wipeSensitiveData(message_text);
        if (matches > 0)
        {
-            logSplit({msg, message_text}); // we will continue with the copy of original message with text modified
+            tryLogSplit({msg, message_text}); // we will continue with the copy of original message with text modified
            return;
        }

    }

-    logSplit(msg);
+    tryLogSplit(msg);
}

+
+void OwnSplitChannel::tryLogSplit(const Poco::Message & msg)
+{
+    try
+    {
+        logSplit(msg);
+    }
+    /// It is better to catch the errors here in order to avoid
+    /// breaking some functionality because of unexpected "File not
+    /// found" (or similar) error.
+    ///
+    /// For example StorageDistributedDirectoryMonitor will mark batch
+    /// as broken, some MergeTree code can also be affected.
+    ///
+    /// Also note, that we cannot log the exception here, since this
+    /// will lead to recursion, using regular tryLogCurrentException().
+    /// but let's log it into the stderr at least.
+    catch (...)
+    {
+        MemoryTracker::LockExceptionInThread lock_memory_tracker(VariableContext::Global);
+
+        const std::string & exception_message = getCurrentExceptionMessage(true);
+        const std::string & message = msg.getText();
+
+        /// NOTE: errors are ignored, since nothing can be done.
+        writeRetry(STDERR_FILENO, "Cannot add message to the log: ");
+        writeRetry(STDERR_FILENO, message.data(), message.size());
+        writeRetry(STDERR_FILENO, "\n");
+        writeRetry(STDERR_FILENO, exception_message.data(), exception_message.size());
+        writeRetry(STDERR_FILENO, "\n");
+    }
+}
+
void OwnSplitChannel::logSplit(const Poco::Message & msg)
{
    ExtendedLogMessage msg_ext = ExtendedLogMessage::getFrom(msg);
@@ -24,6 +24,7 @@ public:

private:
    void logSplit(const Poco::Message & msg);
+    void tryLogSplit(const Poco::Message & msg);

    using ChannelPtr = Poco::AutoPtr<Poco::Channel>;
    /// Handler and its pointer casted to extended interface
@@ -1,102 +1,34 @@
-if (APPLE OR SPLIT_SHARED_LIBRARIES OR NOT ARCH_AMD64)
+if (APPLE OR SPLIT_SHARED_LIBRARIES OR NOT ARCH_AMD64 OR SANITIZE STREQUAL "undefined")
     set (ENABLE_EMBEDDED_COMPILER OFF CACHE INTERNAL "")
 endif()

 option (ENABLE_EMBEDDED_COMPILER "Enable support for 'compile_expressions' option for query execution" ON)
-# Broken in macos. TODO: update clang, re-test, enable on Apple
-if (ENABLE_EMBEDDED_COMPILER AND NOT SPLIT_SHARED_LIBRARIES AND ARCH_AMD64 AND NOT (SANITIZE STREQUAL "undefined"))
-    option (USE_INTERNAL_LLVM_LIBRARY "Use bundled or system LLVM library." ${NOT_UNBUNDLED})
-endif()

 if (NOT ENABLE_EMBEDDED_COMPILER)
-    if(USE_INTERNAL_LLVM_LIBRARY)
-        message (${RECONFIGURE_MESSAGE_LEVEL} "Cannot use internal LLVM library with ENABLE_EMBEDDED_COMPILER=OFF")
-    endif()
+    set (USE_EMBEDDED_COMPILER 0)
     return()
 endif()

 if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/CMakeLists.txt")
-    if (USE_INTERNAL_LLVM_LIBRARY)
-        message (WARNING "submodule contrib/llvm is missing. to fix try run: \n git submodule update --init --recursive")
-        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't fidd internal LLVM library")
-    endif()
-    set (MISSING_INTERNAL_LLVM_LIBRARY 1)
+    message (${RECONFIGURE_MESSAGE_LEVEL} "submodule /contrib/llvm is missing. to fix try run: \n git submodule update --init --recursive")
 endif ()

-if (NOT USE_INTERNAL_LLVM_LIBRARY)
-    set (LLVM_PATHS "/usr/local/lib/llvm" "/usr/lib/llvm")
-
-    foreach(llvm_v 11.1 11)
-        if (NOT LLVM_FOUND)
-            find_package (LLVM ${llvm_v} CONFIG PATHS ${LLVM_PATHS})
-        endif ()
-    endforeach ()
-
-    if (LLVM_FOUND)
-        # Remove dynamically-linked zlib and libedit from LLVM's dependencies:
-        set_target_properties(LLVMSupport PROPERTIES INTERFACE_LINK_LIBRARIES "-lpthread;LLVMDemangle;${ZLIB_LIBRARIES}")
-        set_target_properties(LLVMLineEditor PROPERTIES INTERFACE_LINK_LIBRARIES "LLVMSupport")
-
-        option(LLVM_HAS_RTTI "Enable if LLVM was build with RTTI enabled" ON)
 set (USE_EMBEDDED_COMPILER 1)
-    else()
-        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system LLVM")
-        set (USE_EMBEDDED_COMPILER 0)
-    endif()

-    if (LLVM_FOUND AND OS_LINUX AND USE_LIBCXX AND NOT FORCE_LLVM_WITH_LIBCXX)
-        message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is not set but the LLVM library from OS packages "
-            "in Linux is incompatible with libc++ ABI. LLVM Will be disabled. Force: -DFORCE_LLVM_WITH_LIBCXX=ON")
-        message (${RECONFIGURE_MESSAGE_LEVEL} "Unsupported LLVM configuration, cannot enable LLVM")
-        set (LLVM_FOUND 0)
-        set (USE_EMBEDDED_COMPILER 0)
-    endif ()
-endif()

-if(NOT LLVM_FOUND AND NOT MISSING_INTERNAL_LLVM_LIBRARY)
-    if (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR)
-        message(WARNING "Option ENABLE_EMBEDDED_COMPILER is set but internal LLVM library cannot build if build directory is the same as source directory.")
-        set (LLVM_FOUND 0)
-        set (USE_EMBEDDED_COMPILER 0)
-    elseif (SPLIT_SHARED_LIBRARIES)
-        # llvm-tablegen cannot find shared libraries that we build. Probably can be easily fixed.
-        message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is not compatible with SPLIT_SHARED_LIBRARIES. Build of LLVM will be disabled.")
-        set (LLVM_FOUND 0)
-        set (USE_EMBEDDED_COMPILER 0)
-    elseif (NOT ARCH_AMD64)
-        # It's not supported yet, but you can help.
-        message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY is only available for x86_64. Build of LLVM will be disabled.")
-        set (LLVM_FOUND 0)
-        set (USE_EMBEDDED_COMPILER 0)
-    elseif (SANITIZE STREQUAL "undefined")
-        # llvm-tblgen, that is used during LLVM build, doesn't work with UBSan.
-        message(WARNING "Option USE_INTERNAL_LLVM_LIBRARY does not work with UBSan, because 'llvm-tblgen' tool from LLVM has undefined behaviour. Build of LLVM will be disabled.")
-        set (LLVM_FOUND 0)
-        set (USE_EMBEDDED_COMPILER 0)
-    else ()
-        set (USE_INTERNAL_LLVM_LIBRARY ON)
 set (LLVM_FOUND 1)
-        set (USE_EMBEDDED_COMPILER 1)
-        set (LLVM_VERSION "9.0.0bundled")
+set (LLVM_VERSION "12.0.0bundled")
 set (LLVM_INCLUDE_DIRS
     "${ClickHouse_SOURCE_DIR}/contrib/llvm/llvm/include"
     "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm/include"
 )
 set (LLVM_LIBRARY_DIRS "${ClickHouse_BINARY_DIR}/contrib/llvm/llvm")
-    endif()
-endif()

-if (LLVM_FOUND)
 message(STATUS "LLVM include Directory: ${LLVM_INCLUDE_DIRS}")
 message(STATUS "LLVM library Directory: ${LLVM_LIBRARY_DIRS}")
 message(STATUS "LLVM C++ compiler flags: ${LLVM_CXXFLAGS}")
-else()
-    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't enable LLVM")
-endif()

 # This list was generated by listing all LLVM libraries, compiling the binary and removing all libraries while it still compiles.
 set (REQUIRED_LLVM_LIBRARIES
-LLVMOrcJIT
 LLVMExecutionEngine
 LLVMRuntimeDyld
 LLVMX86CodeGen
@@ -1,17 +0,0 @@
-if (ENABLE_EMBEDDED_COMPILER AND NOT USE_INTERNAL_LLVM_LIBRARY AND USE_STATIC_LIBRARIES)
-    find_library (TERMCAP_LIBRARY tinfo)
-    if (NOT TERMCAP_LIBRARY)
-        find_library (TERMCAP_LIBRARY ncurses)
-    endif()
-    if (NOT TERMCAP_LIBRARY)
-        find_library (TERMCAP_LIBRARY termcap)
-    endif()
-
-    if (NOT TERMCAP_LIBRARY)
-        message (FATAL_ERROR "Statically Linking external LLVM requires termcap")
-    endif()
-
-    target_link_libraries(LLVMSupport INTERFACE ${TERMCAP_LIBRARY})
-
-    message (STATUS "Using termcap: ${TERMCAP_LIBRARY}")
-endif()
cmake/find/yaml-cpp.cmake (new file, +9)
@@ -0,0 +1,9 @@
+option(USE_YAML_CPP "Enable yaml-cpp" ${ENABLE_LIBRARIES})
+
+if (NOT USE_YAML_CPP)
+    return()
+endif()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp/README.md")
+    message (ERROR "submodule contrib/yaml-cpp is missing. to fix try run: \n git submodule update --init --recursive")
+endif()
contrib/CMakeLists.txt (vendored)
@@ -50,6 +50,10 @@ add_subdirectory (replxx-cmake)
 add_subdirectory (unixodbc-cmake)
 add_subdirectory (nanodbc-cmake)

+if (USE_YAML_CPP)
+    add_subdirectory (yaml-cpp-cmake)
+endif()
+
 if (USE_INTERNAL_XZ_LIBRARY)
     add_subdirectory (xz)
 endif()
@@ -57,7 +61,6 @@ endif()
 add_subdirectory (poco-cmake)
 add_subdirectory (croaring-cmake)

-
 # TODO: refactor the contrib libraries below this comment.

 if (USE_INTERNAL_ZSTD_LIBRARY)
@@ -205,11 +208,12 @@ elseif(GTEST_SRC_DIR)
     target_compile_definitions(gtest INTERFACE GTEST_HAS_POSIX_RE=0)
 endif()

-if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY)
+if (USE_EMBEDDED_COMPILER)
     # ld: unknown option: --color-diagnostics
     if (APPLE)
         set (LINKER_SUPPORTS_COLOR_DIAGNOSTICS 0 CACHE INTERNAL "")
     endif ()

     set (LLVM_ENABLE_EH 1 CACHE INTERNAL "")
     set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "")
     set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "")
@@ -224,8 +228,6 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY)

     set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak})
     unset (CMAKE_CXX_STANDARD_bak)
-
-    target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR})
 endif ()

 if (USE_INTERNAL_LIBGSASL_LIBRARY)
contrib/NuRaft (vendored submodule)
@@ -1 +1 @@
-Subproject commit 95d6bbba579b3a4e4c2dede954f541ff6f3dba51
+Subproject commit 2a1bf7d87b4a03561fc66fbb49cee8a288983c5d

contrib/avro (vendored submodule)
@@ -1 +1 @@
-Subproject commit 92caca2d42fc9a97e34e95f963593539d32ed331
+Subproject commit e43c46e87fd32eafdc09471e95344555454c5ef8

contrib/cassandra (vendored submodule)
@@ -1 +1 @@
-Subproject commit c097fb5c7e63cc430016d9a8b240d8e63fbefa52
+Subproject commit eb9b68dadbb4417a2c132ad4a1c2fa76e65e6fc1

contrib/croaring (vendored submodule)
@@ -1 +1 @@
-Subproject commit d8402939b5c9fc134fd4fcf058fe0f7006d2b129
+Subproject commit 2c867e9f9c9e2a3a7032791f94c4c7ae3013f6e0

@@ -1,6 +1,6 @@
 if (SANITIZE OR NOT (
     ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE)) OR
-    (OS_DARWIN AND CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo")
+    (OS_DARWIN AND (CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo" OR CMAKE_BUILD_TYPE STREQUAL "Debug"))
 ))
     if (ENABLE_JEMALLOC)
         message (${RECONFIGURE_MESSAGE_LEVEL}

contrib/libunwind (vendored submodule)
@@ -1 +1 @@
-Subproject commit 8fe25d7dc70f2a4ea38c3e5a33fa9d4199b67a5a
+Subproject commit a491c27b33109a842d577c0f7ac5f5f218859181

contrib/llvm (vendored submodule)
@@ -1 +1 @@
-Subproject commit cfaf365cf96918999d09d976ec736b4518cf5d02
+Subproject commit e5751459412bce1391fb7a2e9bbc01e131bf72f1

contrib/yaml-cpp (new vendored submodule)
@@ -0,0 +1 @@
+Subproject commit 0c86adac6d117ee2b4afcedb8ade19036ca0327d
contrib/yaml-cpp-cmake/CMakeLists.txt (new file, +39)
@@ -0,0 +1,39 @@
+set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/yaml-cpp)
+
+set (SRCS
+    ${LIBRARY_DIR}/src/binary.cpp
+    ${LIBRARY_DIR}/src/emitterutils.cpp
+    ${LIBRARY_DIR}/src/null.cpp
+    ${LIBRARY_DIR}/src/scantoken.cpp
+    ${LIBRARY_DIR}/src/convert.cpp
+    ${LIBRARY_DIR}/src/exceptions.cpp
+    ${LIBRARY_DIR}/src/ostream_wrapper.cpp
+    ${LIBRARY_DIR}/src/simplekey.cpp
+    ${LIBRARY_DIR}/src/depthguard.cpp
+    ${LIBRARY_DIR}/src/exp.cpp
+    ${LIBRARY_DIR}/src/parse.cpp
+    ${LIBRARY_DIR}/src/singledocparser.cpp
+    ${LIBRARY_DIR}/src/directives.cpp
+    ${LIBRARY_DIR}/src/memory.cpp
+    ${LIBRARY_DIR}/src/parser.cpp
+    ${LIBRARY_DIR}/src/stream.cpp
+    ${LIBRARY_DIR}/src/emit.cpp
+    ${LIBRARY_DIR}/src/nodebuilder.cpp
+    ${LIBRARY_DIR}/src/regex_yaml.cpp
+    ${LIBRARY_DIR}/src/tag.cpp
+    ${LIBRARY_DIR}/src/emitfromevents.cpp
+    ${LIBRARY_DIR}/src/node.cpp
+    ${LIBRARY_DIR}/src/scanner.cpp
+    ${LIBRARY_DIR}/src/emitter.cpp
+    ${LIBRARY_DIR}/src/node_data.cpp
+    ${LIBRARY_DIR}/src/scanscalar.cpp
+    ${LIBRARY_DIR}/src/emitterstate.cpp
+    ${LIBRARY_DIR}/src/nodeevents.cpp
+    ${LIBRARY_DIR}/src/scantag.cpp
+)
+
+add_library (yaml-cpp ${SRCS})
+
+
+target_include_directories(yaml-cpp PRIVATE ${LIBRARY_DIR}/include/yaml-cpp)
+target_include_directories(yaml-cpp SYSTEM BEFORE PUBLIC ${LIBRARY_DIR}/include)
debian/clickhouse-server.cron.d (vendored)
@@ -1 +1 @@
-#*/10 * * * * root (which service > /dev/null 2>&1 && (service clickhouse-server condstart ||:)) || /etc/init.d/clickhouse-server condstart > /dev/null 2>&1
+#*/10 * * * * root ((which service > /dev/null 2>&1 && (service clickhouse-server condstart ||:)) || /etc/init.d/clickhouse-server condstart) > /dev/null 2>&1

debian/clickhouse-server.init (vendored)
@@ -229,6 +229,7 @@ status()
     case "$1" in
         status)
             status
+            exit 0
             ;;
     esac

@@ -154,6 +154,10 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ

     if clang_tidy:
         cmake_flags.append('-DENABLE_CLANG_TIDY=1')
+        cmake_flags.append('-DENABLE_UTILS=1')
+        cmake_flags.append('-DUSE_GTEST=1')
+        cmake_flags.append('-DENABLE_TESTS=1')
+        cmake_flags.append('-DENABLE_EXAMPLES=1')
         # Don't stop on first error to find more clang-tidy errors in one run.
         result.append('NINJA_FLAGS=-k0')

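The hunk above is from a Python packaging helper: when a clang-tidy build is requested, it now also switches on utils, gtest, tests and examples, so the analyzer actually sees that code. A reduced sketch of the same idea (a simplified standalone function, not the real parse_env_variables signature):

```python
def tidy_cmake_flags(clang_tidy: bool) -> list:
    """Extra CMake flags for a clang-tidy run; cover as many targets as possible."""
    cmake_flags = []
    if clang_tidy:
        cmake_flags.append('-DENABLE_CLANG_TIDY=1')
        # Also build the optional targets so the analyzer checks them too.
        cmake_flags += [
            '-DENABLE_UTILS=1',
            '-DUSE_GTEST=1',
            '-DENABLE_TESTS=1',
            '-DENABLE_EXAMPLES=1',
        ]
    return cmake_flags


print(tidy_cmake_flags(True))
```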
@@ -34,7 +34,7 @@ fi
 CLICKHOUSE_CONFIG="${CLICKHOUSE_CONFIG:-/etc/clickhouse-server/config.xml}"

 if ! $gosu test -f "$CLICKHOUSE_CONFIG" -a -r "$CLICKHOUSE_CONFIG"; then
-    echo "Configuration file '$dir' isn't readable by user with id '$USER'"
+    echo "Configuration file '$CLICKHOUSE_CONFIG' isn't readable by user with id '$USER'"
     exit 1
 fi

@@ -374,9 +374,13 @@ function run_tests
         01801_s3_cluster

         # Depends on LLVM JIT
+        01072_nullable_jit
         01852_jit_if
         01865_jit_comparison_constant_result
         01871_merge_tree_compile_expressions
+
+        # needs psql
+        01889_postgresql_protocol_null_fields
     )

     time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
@@ -56,17 +56,19 @@ function watchdog
     sleep 3600

     echo "Fuzzing run has timed out"
-    killall clickhouse-client ||:
     for _ in {1..10}
     do
-        if ! pgrep -f clickhouse-client
+        # Only kill by pid the particular client that runs the fuzzing, or else
+        # we can kill some clickhouse-client processes this script starts later,
+        # e.g. for checking server liveness.
+        if ! kill $fuzzer_pid
         then
             break
         fi
         sleep 1
     done

-    killall -9 clickhouse-client ||:
+    kill -9 -- $fuzzer_pid ||:
 }

 function filter_exists
@@ -85,7 +87,7 @@ function fuzz
 {
     # Obtain the list of newly added tests. They will be fuzzed in more extreme way than other tests.
     # Don't overwrite the NEW_TESTS_OPT so that it can be set from the environment.
-    NEW_TESTS="$(grep -P 'tests/queries/0_stateless/.*\.sql' ci-changed-files.txt | sed -r -e 's!^!ch/!' | sort -R)"
+    NEW_TESTS="$(sed -n 's!\(^tests/queries/0_stateless/.*\.sql\)$!ch/\1!p' ci-changed-files.txt | sort -R)"
     # ci-changed-files.txt contains also files that has been deleted/renamed, filter them out.
     NEW_TESTS="$(filter_exists $NEW_TESTS)"
     if [[ -n "$NEW_TESTS" ]]
@@ -95,14 +97,10 @@ function fuzz
         NEW_TESTS_OPT="${NEW_TESTS_OPT:-}"
     fi

+    export CLICKHOUSE_WATCHDOG_ENABLE=0 # interferes with gdb
     clickhouse-server --config-file db/config.xml -- --path db 2>&1 | tail -100000 > server.log &

     server_pid=$!
     kill -0 $server_pid
-    while ! clickhouse-client --query "select 1" && kill -0 $server_pid ; do echo . ; sleep 1 ; done
-    clickhouse-client --query "select 1"
-    kill -0 $server_pid
-    echo Server started

     echo "
 handle all noprint
@@ -113,19 +111,70 @@ thread apply all backtrace
 continue
 " > script.gdb

-    gdb -batch -command script.gdb -p "$(pidof clickhouse-server)" &
+    gdb -batch -command script.gdb -p $server_pid &

+    # Check connectivity after we attach gdb, because it might cause the server
+    # to freeze and the fuzzer will fail.
+    for _ in {1..60}
+    do
+        sleep 1
+        if clickhouse-client --query "select 1"
+        then
+            break
+        fi
+    done
+    clickhouse-client --query "select 1" # This checks that the server is responding
+    kill -0 $server_pid # This checks that it is our server that is started and not some other one
+    echo Server started and responded
+
-    fuzzer_exit_code=0
     # SC2012: Use find instead of ls to better handle non-alphanumeric filenames. They are all alphanumeric.
     # SC2046: Quote this to prevent word splitting. Actually I need word splitting.
     # shellcheck disable=SC2012,SC2046
-    clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \
+    clickhouse-client \
+        --receive_timeout=10 \
+        --receive_data_timeout_ms=10000 \
+        --query-fuzzer-runs=1000 \
+        --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) \
+        $NEW_TESTS_OPT \
         > >(tail -n 100000 > fuzzer.log) \
-        2>&1 \
-        || fuzzer_exit_code=$?
+        2>&1 &
+    fuzzer_pid=$!
+    echo "Fuzzer pid is $fuzzer_pid"
+
+    # Start a watchdog that should kill the fuzzer on timeout.
+    # The shell won't kill the child sleep when we kill it, so we have to put it
+    # into a separate process group so that we can kill them all.
+    set -m
+    watchdog &
+    watchdog_pid=$!
+    set +m
+    # Check that the watchdog has started.
+    kill -0 $watchdog_pid
+
+    # Wait for the fuzzer to complete.
+    # Note that the 'wait || ...' thing is required so that the script doesn't
+    # exit because of 'set -e' when 'wait' returns nonzero code.
+    fuzzer_exit_code=0
+    wait "$fuzzer_pid" || fuzzer_exit_code=$?
     echo "Fuzzer exit code is $fuzzer_exit_code"

+    kill -- -$watchdog_pid ||:
+
+    # If the server dies, most often the fuzzer returns code 210: connetion
+    # refused, and sometimes also code 32: attempt to read after eof. For
+    # simplicity, check again whether the server is accepting connections, using
+    # clickhouse-client. We don't check for existence of server process, because
+    # the process is still present while the server is terminating and not
+    # accepting the connections anymore.
+    if clickhouse-client --query "select 1 format Null"
+    then
+        server_died=0
+    else
+        echo "Server live check returns $?"
+        server_died=1
+    fi
+
+    # Stop the server.
     clickhouse-client --query "select elapsed, query from system.processes" ||:
     killall clickhouse-server ||:
     for _ in {1..10}
@@ -137,6 +186,45 @@ continue
         sleep 1
     done
     killall -9 clickhouse-server ||:
+
+    # Debug.
+    date
+    sleep 10
+    jobs
+    pstree -aspgT
+
+    # Make files with status and description we'll show for this check on Github.
+    task_exit_code=$fuzzer_exit_code
+    if [ "$server_died" == 1 ]
+    then
+        # The server has died.
+        task_exit_code=210
+        echo "failure" > status.txt
+        if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
+        then
+            echo "Lost connection to server. See the logs." > description.txt
+        fi
+    elif [ "$fuzzer_exit_code" == "143" ] || [ "$fuzzer_exit_code" == "0" ]
+    then
+        # Variants of a normal run:
+        # 0 -- fuzzing ended earlier than timeout.
+        # 143 -- SIGTERM -- the fuzzer was killed by timeout.
+        task_exit_code=0
+        echo "success" > status.txt
+        echo "OK" > description.txt
+    else
+        # The server was alive, but the fuzzer returned some error. This might
+        # be some client-side error detected by fuzzing, or a problem in the
+        # fuzzer itself. Don't grep the server log in this case, because we will
+        # find a message about normal server termination (Received signal 15),
+        # which is confusing.
+        task_exit_code=$fuzzer_exit_code
+        echo "failure" > status.txt
+        { grep -o "Found error:.*" fuzzer.log \
+            || grep -o "Exception.*" fuzzer.log \
+            || echo "Fuzzer failed ($fuzzer_exit_code). See the logs." ; } \
+            | tail -1 > description.txt
+    fi
 }

 case "$stage" in
@@ -165,50 +253,7 @@ case "$stage" in
     time configure
     ;&
 "fuzz")
-    # Start a watchdog that should kill the fuzzer on timeout.
-    # The shell won't kill the child sleep when we kill it, so we have to put it
-    # into a separate process group so that we can kill them all.
-    set -m
-    watchdog &
-    watchdog_pid=$!
-    set +m
-    # Check that the watchdog has started
-    kill -0 $watchdog_pid
-
-    fuzzer_exit_code=0
-    time fuzz || fuzzer_exit_code=$?
-    kill -- -$watchdog_pid ||:
-
-    # Debug
-    date
-    sleep 10
-    jobs
-    pstree -aspgT
-
-    # Make files with status and description we'll show for this check on Github
-    task_exit_code=$fuzzer_exit_code
-    if [ "$fuzzer_exit_code" == 143 ]
-    then
-        # SIGTERM -- the fuzzer was killed by timeout, which means a normal run.
-        echo "success" > status.txt
-        echo "OK" > description.txt
-        task_exit_code=0
-    elif [ "$fuzzer_exit_code" == 210 ]
-    then
-        # Lost connection to the server. This probably means that the server died
-        # with abort.
-        echo "failure" > status.txt
-        if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
-        then
-            echo "Lost connection to server. See the logs." > description.txt
-        fi
-    else
-        # Something different -- maybe the fuzzer itself died? Don't grep the
-        # server log in this case, because we will find a message about normal
-        # server termination (Received signal 15), which is confusing.
-        echo "failure" > status.txt
-        echo "Fuzzer failed ($fuzzer_exit_code). See the logs." > description.txt
-    fi
+    time fuzz
     ;&
 "report")
 cat > report.html <<EOF ||:
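The reworked fuzzer stage above moves the bookkeeping into the `fuzz` function: the client runs in the background, a watchdog kills it by pid on timeout, and the final status is derived from whether the server survived and from the fuzzer's exit code. A compact restatement of that classification in Python (an illustration only, not part of the repository; the authoritative logic is the shell above):

```python
def classify_run(fuzzer_exit_code: int, server_alive: bool) -> tuple:
    """Return (task_exit_code, status, description) the way the CI check reports it."""
    if not server_alive:
        # Connection refused (210) / read-after-eof (32) usually mean the server died.
        return 210, "failure", "Lost connection to server. See the logs."
    if fuzzer_exit_code in (0, 143):
        # 0 -- fuzzing finished early; 143 -- SIGTERM from the timeout watchdog.
        return 0, "success", "OK"
    # Server is alive but the client failed: a client-side bug found by fuzzing,
    # or a problem in the fuzzer itself.
    return fuzzer_exit_code, "failure", f"Fuzzer failed ({fuzzer_exit_code}). See the logs."


print(classify_run(143, server_alive=True))   # (0, 'success', 'OK')
print(classify_run(210, server_alive=False))  # (210, 'failure', 'Lost connection to server. See the logs.')
```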
@@ -1,5 +1,5 @@
 # docker build -t yandex/clickhouse-integration-tests-runner .
-FROM ubuntu:18.04
+FROM ubuntu:20.04

 RUN apt-get update \
     && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
@@ -14,7 +14,6 @@ RUN apt-get update \
         wget \
         git \
         iproute2 \
-        module-init-tools \
         cgroupfs-mount \
         python3-pip \
         tzdata \
@@ -42,7 +41,6 @@ ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

 ENV DOCKER_CHANNEL stable
-ENV DOCKER_VERSION 5:19.03.13~3-0~ubuntu-bionic
 RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
 RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -c -s) ${DOCKER_CHANNEL}"

@@ -66,25 +64,28 @@ RUN python3 -m pip install \
     dict2xml \
     dicttoxml \
     docker \
-    docker-compose==1.22.0 \
+    docker-compose==1.28.2 \
     grpcio \
     grpcio-tools \
     kafka-python \
     kazoo \
     minio \
     protobuf \
-    psycopg2-binary==2.7.5 \
+    psycopg2-binary==2.8.6 \
     pymongo \
     pytest \
     pytest-timeout \
+    pytest-xdist \
     redis \
     tzlocal \
     urllib3 \
-    requests-kerberos
+    requests-kerberos \
+    pyhdfs

 COPY modprobe.sh /usr/local/bin/modprobe
 COPY dockerd-entrypoint.sh /usr/local/bin/
 COPY compose/ /compose/
+COPY misc/ /misc/

 RUN set -x \
     && addgroup --system dockremap \
@@ -93,7 +94,6 @@ RUN set -x \
     && echo 'dockremap:165536:65536' >> /etc/subuid \
     && echo 'dockremap:165536:65536' >> /etc/subgid

-VOLUME /var/lib/docker
 EXPOSE 2375
 ENTRYPOINT ["dockerd-entrypoint.sh"]
 CMD ["sh", "-c", "pytest $PYTEST_OPTS"]
@@ -1,7 +1,5 @@
 version: '2.3'
 services:
     cassandra1:
-        image: cassandra
+        image: cassandra:4.0
         restart: always
-        ports:
-            - 9043:9042

@@ -5,6 +5,10 @@ services:
         hostname: hdfs1
         restart: always
         ports:
-            - 50075:50075
-            - 50070:50070
+            - ${HDFS_NAME_EXTERNAL_PORT}:${HDFS_NAME_INTERNAL_PORT} #50070
+            - ${HDFS_DATA_EXTERNAL_PORT}:${HDFS_DATA_INTERNAL_PORT} #50075
         entrypoint: /etc/bootstrap.sh -d
+        volumes:
+            - type: ${HDFS_FS:-tmpfs}
+              source: ${HDFS_LOGS:-}
+              target: /usr/local/hadoop/logs

@@ -15,10 +15,11 @@ services:
         image: confluentinc/cp-kafka:5.2.0
         hostname: kafka1
         ports:
-            - "9092:9092"
+            - ${KAFKA_EXTERNAL_PORT}:${KAFKA_EXTERNAL_PORT}
         environment:
-            KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092,OUTSIDE://kafka1:19092
-            KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:19092
+            KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:${KAFKA_EXTERNAL_PORT},OUTSIDE://kafka1:19092
+            KAFKA_ADVERTISED_HOST_NAME: kafka1
+            KAFKA_LISTENERS: INSIDE://0.0.0.0:${KAFKA_EXTERNAL_PORT},OUTSIDE://0.0.0.0:19092
             KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
             KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
             KAFKA_BROKER_ID: 1

@@ -34,7 +35,7 @@ services:
         image: confluentinc/cp-schema-registry:5.2.0
         hostname: schema-registry
         ports:
-            - "8081:8081"
+            - ${SCHEMA_REGISTRY_EXTERNAL_PORT}:${SCHEMA_REGISTRY_INTERNAL_PORT}
         environment:
             SCHEMA_REGISTRY_HOST_NAME: schema-registry
             SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT

@@ -11,16 +11,18 @@ services:
             - ${KERBERIZED_HDFS_DIR}/../../hdfs_configs/bootstrap.sh:/etc/bootstrap.sh:ro
             - ${KERBERIZED_HDFS_DIR}/secrets:/usr/local/hadoop/etc/hadoop/conf
             - ${KERBERIZED_HDFS_DIR}/secrets/krb_long.conf:/etc/krb5.conf:ro
+            - type: ${KERBERIZED_HDFS_FS:-tmpfs}
+              source: ${KERBERIZED_HDFS_LOGS:-}
+              target: /var/log/hadoop-hdfs
         ports:
-            - 1006:1006
-            - 50070:50070
-            - 9010:9010
+            - ${KERBERIZED_HDFS_NAME_EXTERNAL_PORT}:${KERBERIZED_HDFS_NAME_INTERNAL_PORT} #50070
+            - ${KERBERIZED_HDFS_DATA_EXTERNAL_PORT}:${KERBERIZED_HDFS_DATA_INTERNAL_PORT} #1006
         depends_on:
            - hdfskerberos
         entrypoint: /etc/bootstrap.sh -d

     hdfskerberos:
-        image: yandex/clickhouse-kerberos-kdc:${DOCKER_KERBEROS_KDC_TAG}
+        image: yandex/clickhouse-kerberos-kdc:${DOCKER_KERBEROS_KDC_TAG:-latest}
         hostname: hdfskerberos
         volumes:
            - ${KERBERIZED_HDFS_DIR}/secrets:/tmp/keytab

@@ -23,13 +23,13 @@ services:
         # restart: always
         hostname: kerberized_kafka1
         ports:
-            - "9092:9092"
-            - "9093:9093"
+            - ${KERBERIZED_KAFKA_EXTERNAL_PORT}:${KERBERIZED_KAFKA_EXTERNAL_PORT}
         environment:
-            KAFKA_LISTENERS: OUTSIDE://:19092,UNSECURED_OUTSIDE://:19093,UNSECURED_INSIDE://:9093
-            KAFKA_ADVERTISED_LISTENERS: OUTSIDE://kerberized_kafka1:19092,UNSECURED_OUTSIDE://kerberized_kafka1:19093,UNSECURED_INSIDE://localhost:9093
+            KAFKA_LISTENERS: OUTSIDE://:19092,UNSECURED_OUTSIDE://:19093,UNSECURED_INSIDE://0.0.0.0:${KERBERIZED_KAFKA_EXTERNAL_PORT}
+            KAFKA_ADVERTISED_LISTENERS: OUTSIDE://kerberized_kafka1:19092,UNSECURED_OUTSIDE://kerberized_kafka1:19093,UNSECURED_INSIDE://localhost:${KERBERIZED_KAFKA_EXTERNAL_PORT}
             # KAFKA_LISTENERS: INSIDE://kerberized_kafka1:9092,OUTSIDE://kerberized_kafka1:19092
             # KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092,OUTSIDE://kerberized_kafka1:19092
+            KAFKA_ADVERTISED_HOST_NAME: kerberized_kafka1
             KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: GSSAPI
             KAFKA_SASL_ENABLED_MECHANISMS: GSSAPI
             KAFKA_SASL_KERBEROS_SERVICE_NAME: kafka

@@ -6,8 +6,8 @@ services:
         volumes:
             - data1-1:/data1
             - ${MINIO_CERTS_DIR:-}:/certs
-        ports:
-            - "9001:9001"
+        expose:
+            - ${MINIO_PORT}
         environment:
             MINIO_ACCESS_KEY: minio
             MINIO_SECRET_KEY: minio123

@@ -20,14 +20,14 @@ services:
     # HTTP proxies for Minio.
     proxy1:
         image: yandex/clickhouse-s3-proxy
-        ports:
+        expose:
             - "8080" # Redirect proxy port
             - "80" # Reverse proxy port
             - "443" # Reverse proxy port (secure)

     proxy2:
         image: yandex/clickhouse-s3-proxy
-        ports:
+        expose:
             - "8080"
             - "80"
             - "443"

@@ -35,7 +35,7 @@ services:
     # Empty container to run proxy resolver.
     resolver:
         image: yandex/clickhouse-python-bottle
-        ports:
+        expose:
             - "8080"
         tty: true
         depends_on:

@@ -7,5 +7,5 @@ services:
             MONGO_INITDB_ROOT_USERNAME: root
             MONGO_INITDB_ROOT_PASSWORD: clickhouse
         ports:
-            - 27018:27017
+            - ${MONGO_EXTERNAL_PORT}:${MONGO_INTERNAL_PORT}
         command: --profile=2 --verbose

@@ -1,10 +1,24 @@
 version: '2.3'
 services:
-    mysql1:
+    mysql57:
         image: mysql:5.7
         restart: always
         environment:
             MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3308:3306
-        command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
+            MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST}
+            DATADIR: /mysql/
+        expose:
+            - ${MYSQL_PORT}
+        command: --server_id=100
+            --log-bin='mysql-bin-1.log'
+            --default-time-zone='+3:00'
+            --gtid-mode="ON"
+            --enforce-gtid-consistency
+            --log-error-verbosity=3
+            --log-error=/mysql/error.log
+            --general-log=ON
+            --general-log-file=/mysql/general.log
+        volumes:
+            - type: ${MYSQL_LOGS_FS:-tmpfs}
+              source: ${MYSQL_LOGS:-}
+              target: /mysql/

@@ -12,3 +12,10 @@ services:
             --gtid-mode="ON"
             --enforce-gtid-consistency
             --log-error-verbosity=3
+            --log-error=/var/log/mysqld/error.log
+            --general-log=ON
+            --general-log-file=/var/log/mysqld/general.log
+        volumes:
+            - type: ${MYSQL_LOGS_FS:-tmpfs}
+              source: ${MYSQL_LOGS:-}
+              target: /var/log/mysqld/
@@ -0,0 +1,23 @@
+version: '2.3'
+services:
+    mysql80:
+        image: mysql:8.0
+        restart: always
+        environment:
+            MYSQL_ROOT_PASSWORD: clickhouse
+            MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST}
+            DATADIR: /mysql/
+        expose:
+            - ${MYSQL8_PORT}
+        command: --server_id=100 --log-bin='mysql-bin-1.log'
+            --default_authentication_plugin='mysql_native_password'
+            --default-time-zone='+3:00' --gtid-mode="ON"
+            --enforce-gtid-consistency
+            --log-error-verbosity=3
+            --log-error=/mysql/error.log
+            --general-log=ON
+            --general-log-file=/mysql/general.log
+        volumes:
+            - type: ${MYSQL8_LOGS_FS:-tmpfs}
+              source: ${MYSQL8_LOGS:-}
+              target: /mysql/

@@ -1,15 +0,0 @@
-version: '2.3'
-services:
-    mysql8_0:
-        image: mysql:8.0
-        restart: 'no'
-        environment:
-            MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3309:3306
-        command: --server_id=100 --log-bin='mysql-bin-1.log'
-            --default_authentication_plugin='mysql_native_password'
-            --default-time-zone='+3:00'
-            --gtid-mode="ON"
-            --enforce-gtid-consistency
-            --log-error-verbosity=3

@@ -1,6 +1,6 @@
 version: '2.3'
 services:
-    mysql1:
+    mysql_client:
         image: mysql:5.7
         restart: always
         environment:

@@ -5,19 +5,64 @@ services:
         restart: always
         environment:
             MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3348:3306
+            MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
+            DATADIR: /mysql/
+        expose:
+            - ${MYSQL_CLUSTER_PORT}
+        command: --server_id=100
+            --log-bin='mysql-bin-2.log'
+            --default-time-zone='+3:00'
+            --gtid-mode="ON"
+            --enforce-gtid-consistency
+            --log-error-verbosity=3
+            --log-error=/mysql/2_error.log
+            --general-log=ON
+            --general-log-file=/mysql/2_general.log
+        volumes:
+            - type: ${MYSQL_CLUSTER_LOGS_FS:-tmpfs}
+              source: ${MYSQL_CLUSTER_LOGS:-}
+              target: /mysql/
     mysql3:
         image: mysql:5.7
         restart: always
         environment:
             MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3388:3306
+            MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
+            DATADIR: /mysql/
+        expose:
+            - ${MYSQL_CLUSTER_PORT}
+        command: --server_id=100
+            --log-bin='mysql-bin-3.log'
+            --default-time-zone='+3:00'
+            --gtid-mode="ON"
+            --enforce-gtid-consistency
+            --log-error-verbosity=3
+            --log-error=/mysql/3_error.log
+            --general-log=ON
+            --general-log-file=/mysql/3_general.log
+        volumes:
+            - type: ${MYSQL_CLUSTER_LOGS_FS:-tmpfs}
+              source: ${MYSQL_CLUSTER_LOGS:-}
+              target: /mysql/
     mysql4:
         image: mysql:5.7
         restart: always
         environment:
             MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3368:3306
+            MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
+            DATADIR: /mysql/
+        expose:
+            - ${MYSQL_CLUSTER_PORT}
+        command: --server_id=100
+            --log-bin='mysql-bin-4.log'
+            --default-time-zone='+3:00'
+            --gtid-mode="ON"
+            --enforce-gtid-consistency
+            --log-error-verbosity=3
+            --log-error=/mysql/4_error.log
+            --general-log=ON
+            --general-log-file=/mysql/4_general.log
+        volumes:
+            - type: ${MYSQL_CLUSTER_LOGS_FS:-tmpfs}
+              source: ${MYSQL_CLUSTER_LOGS:-}
+              target: /mysql/

@@ -2,12 +2,24 @@ version: '2.3'
 services:
     postgres1:
         image: postgres
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
         restart: always
-        environment:
-            POSTGRES_PASSWORD: mysecretpassword
-        ports:
-            - 5432:5432
+        expose:
+            - ${POSTGRES_PORT}
+        healthcheck:
+            test: ["CMD-SHELL", "pg_isready -U postgres"]
+            interval: 10s
+            timeout: 5s
+            retries: 5
         networks:
             default:
                 aliases:
                     - postgre-sql.local
+        environment:
+            POSTGRES_HOST_AUTH_METHOD: "trust"
+            POSTGRES_PASSWORD: mysecretpassword
+            PGDATA: /postgres/data
+        volumes:
+            - type: ${POSTGRES_LOGS_FS:-tmpfs}
+              source: ${POSTGRES_DIR:-}
+              target: /postgres/

@@ -2,22 +2,43 @@ version: '2.3'
 services:
     postgres2:
         image: postgres
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
         restart: always
         environment:
+            POSTGRES_HOST_AUTH_METHOD: "trust"
             POSTGRES_PASSWORD: mysecretpassword
-        ports:
-            - 5421:5432
+            PGDATA: /postgres/data
+        expose:
+            - ${POSTGRES_PORT}
+        volumes:
+            - type: ${POSTGRES_LOGS_FS:-tmpfs}
+              source: ${POSTGRES2_DIR:-}
+              target: /postgres/
     postgres3:
         image: postgres
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
         restart: always
         environment:
+            POSTGRES_HOST_AUTH_METHOD: "trust"
             POSTGRES_PASSWORD: mysecretpassword
-        ports:
-            - 5441:5432
+            PGDATA: /postgres/data
+        expose:
+            - ${POSTGRES_PORT}
+        volumes:
+            - type: ${POSTGRES_LOGS_FS:-tmpfs}
+              source: ${POSTGRES3_DIR:-}
+              target: /postgres/
     postgres4:
         image: postgres
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
         restart: always
         environment:
+            POSTGRES_HOST_AUTH_METHOD: "trust"
             POSTGRES_PASSWORD: mysecretpassword
-        ports:
-            - 5461:5432
+            PGDATA: /postgres/data
+        expose:
+            - ${POSTGRES_PORT}
+        volumes:
+            - type: ${POSTGRES_LOGS_FS:-tmpfs}
+              source: ${POSTGRES4_DIR:-}
+              target: /postgres/

@@ -2,11 +2,15 @@ version: '2.3'

 services:
     rabbitmq1:
-        image: rabbitmq:3-management
+        image: rabbitmq:3-management-alpine
         hostname: rabbitmq1
-        ports:
-            - "5672:5672"
-            - "15672:15672"
+        expose:
+            - ${RABBITMQ_PORT}
         environment:
             RABBITMQ_DEFAULT_USER: "root"
             RABBITMQ_DEFAULT_PASS: "clickhouse"
+            RABBITMQ_LOG_BASE: /rabbitmq_logs/
+        volumes:
+            - type: ${RABBITMQ_LOGS_FS:-tmpfs}
+              source: ${RABBITMQ_LOGS:-}
+              target: /rabbitmq_logs/

@@ -4,5 +4,5 @@ services:
         image: redis
         restart: always
         ports:
-            - 6380:6379
+            - ${REDIS_EXTERNAL_PORT}:${REDIS_INTERNAL_PORT}
         command: redis-server --requirepass "clickhouse" --databases 32
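The docker-compose hunks above replace every hard-coded host port (3308, 5432, 6380, ...) with environment variables such as `${MYSQL_PORT}` or `${REDIS_EXTERNAL_PORT}`, so parallel test runs no longer collide on host ports. A minimal sketch of how a test driver could pick a free port and feed it to such a compose file (an illustration only; the compose file name shown here is an assumption, not the actual ClickHouse test framework code):

```python
import os
import socket
import subprocess

def free_port() -> int:
    """Ask the OS for an unused TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

env = dict(os.environ)
env["REDIS_EXTERNAL_PORT"] = str(free_port())  # substituted into ${REDIS_EXTERNAL_PORT}
env["REDIS_INTERNAL_PORT"] = "6379"

# docker-compose expands ${...} from the environment of the calling process.
subprocess.run(
    ["docker-compose", "-f", "docker_compose_redis.yml", "up", "-d"],
    env=env,
    check=True,
)
```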
@@ -0,0 +1,75 @@
+version: '2.3'
+services:
+    zoo1:
+        image: zookeeper:3.6.2
+        restart: always
+        environment:
+            ZOO_TICK_TIME: 500
+            ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
+            ZOO_MY_ID: 1
+            JVMFLAGS: -Dzookeeper.forceSync=no
+            ZOO_SECURE_CLIENT_PORT: $ZOO_SECURE_CLIENT_PORT
+        command: ["zkServer.sh", "start-foreground"]
+        entrypoint: /zookeeper-ssl-entrypoint.sh
+        volumes:
+            - type: bind
+              source: /misc/zookeeper-ssl-entrypoint.sh
+              target: /zookeeper-ssl-entrypoint.sh
+            - type: bind
+              source: /misc/client.crt
+              target: /clickhouse-config/client.crt
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA1:-}
+              target: /data
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA_LOG1:-}
+              target: /datalog
+    zoo2:
+        image: zookeeper:3.6.2
+        restart: always
+        environment:
+            ZOO_TICK_TIME: 500
+            ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888
+            ZOO_MY_ID: 2
+            JVMFLAGS: -Dzookeeper.forceSync=no
+            ZOO_SECURE_CLIENT_PORT: $ZOO_SECURE_CLIENT_PORT
+
+        command: ["zkServer.sh", "start-foreground"]
+        entrypoint: /zookeeper-ssl-entrypoint.sh
+        volumes:
+            - type: bind
+              source: /misc/zookeeper-ssl-entrypoint.sh
+              target: /zookeeper-ssl-entrypoint.sh
+            - type: bind
+              source: /misc/client.crt
+              target: /clickhouse-config/client.crt
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA2:-}
+              target: /data
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA_LOG2:-}
+              target: /datalog
+    zoo3:
+        image: zookeeper:3.6.2
+        restart: always
+        environment:
+            ZOO_TICK_TIME: 500
+            ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
+            ZOO_MY_ID: 3
+            JVMFLAGS: -Dzookeeper.forceSync=no
+            ZOO_SECURE_CLIENT_PORT: $ZOO_SECURE_CLIENT_PORT
+        command: ["zkServer.sh", "start-foreground"]
+        entrypoint: /zookeeper-ssl-entrypoint.sh
+        volumes:
+            - type: bind
+              source: /misc/zookeeper-ssl-entrypoint.sh
+              target: /zookeeper-ssl-entrypoint.sh
+            - type: bind
+              source: /misc/client.crt
+              target: /clickhouse-config/client.crt
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA3:-}
+              target: /data
+            - type: ${ZK_FS:-tmpfs}
+              source: ${ZK_DATA_LOG3:-}
+              target: /datalog
@@ -2,17 +2,17 @@
 set -e

 mkdir -p /etc/docker/
-cat > /etc/docker/daemon.json << EOF
-{
+echo '{
     "ipv6": true,
     "fixed-cidr-v6": "fd00::/8",
     "ip-forward": true,
+    "log-level": "debug",
+    "storage-driver": "overlay2",
     "insecure-registries" : ["dockerhub-proxy.sas.yp-c.yandex.net:5000"],
     "registry-mirrors" : ["http://dockerhub-proxy.sas.yp-c.yandex.net:5000"]
-}
-EOF
+}' | dd of=/etc/docker/daemon.json

-dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 &>/var/log/somefile &
+dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --default-address-pool base=172.17.0.0/12,size=24 &>/ClickHouse/tests/integration/dockerd.log &

 set +e
 reties=0
@@ -27,6 +27,10 @@ while true; do
 done
 set -e

+# cleanup for retry run if volume is not recreated
+docker kill "$(docker ps -aq)" || true
+docker rm "$(docker ps -aq)" || true
+
 echo "Start tests"
 export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse
 export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse
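The entrypoint above now generates /etc/docker/daemon.json with extra debugging and storage options and pins a dedicated address pool for the test networks. For readers who prefer to see the resulting configuration built programmatically, a rough equivalent in Python (illustration only; the real script stays in shell):

```python
import json

daemon_config = {
    "ipv6": True,
    "fixed-cidr-v6": "fd00::/8",
    "ip-forward": True,
    "log-level": "debug",
    "storage-driver": "overlay2",
    "insecure-registries": ["dockerhub-proxy.sas.yp-c.yandex.net:5000"],
    "registry-mirrors": ["http://dockerhub-proxy.sas.yp-c.yandex.net:5000"],
}

# Equivalent of the `echo '{...}' | dd of=/etc/docker/daemon.json` line above,
# written to the current directory here instead of /etc/docker/.
with open("daemon.json", "w") as f:
    json.dump(daemon_config, f, indent=4)
```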
docker/test/integration/runner/misc/client.crt (new file, +19)
@@ -0,0 +1,19 @@
+-----BEGIN CERTIFICATE-----
+MIIC/TCCAeWgAwIBAgIJANjx1QSR77HBMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV
+BAMMCWxvY2FsaG9zdDAgFw0xODA3MzAxODE2MDhaGA8yMjkyMDUxNDE4MTYwOFow
+FDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+CgKCAQEAs9uSo6lJG8o8pw0fbVGVu0tPOljSWcVSXH9uiJBwlZLQnhN4SFSFohfI
+4K8U1tBDTnxPLUo/V1K9yzoLiRDGMkwVj6+4+hE2udS2ePTQv5oaMeJ9wrs+5c9T
+4pOtlq3pLAdm04ZMB1nbrEysceVudHRkQbGHzHp6VG29Fw7Ga6YpqyHQihRmEkTU
+7UCYNA+Vk7aDPdMS/khweyTpXYZimaK9f0ECU3/VOeG3fH6Sp2X6FN4tUj/aFXEj
+sRmU5G2TlYiSIUMF2JPdhSihfk1hJVALrHPTU38SOL+GyyBRWdNcrIwVwbpvsvPg
+pryMSNxnpr0AK0dFhjwnupIv5hJIOQIDAQABo1AwTjAdBgNVHQ4EFgQUjPLb3uYC
+kcamyZHK4/EV8jAP0wQwHwYDVR0jBBgwFoAUjPLb3uYCkcamyZHK4/EV8jAP0wQw
+DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAM/ocuDvfPus/KpMVD51j
+4IdlU8R0vmnYLQ+ygzOAo7+hUWP5j0yvq4ILWNmQX6HNvUggCgFv9bjwDFhb/5Vr
+85ieWfTd9+LTjrOzTw4avdGwpX9G+6jJJSSq15tw5ElOIFb/qNA9O4dBiu8vn03C
+L/zRSXrARhSqTW5w/tZkUcSTT+M5h28+Lgn9ysx4Ff5vi44LJ1NnrbJbEAIYsAAD
++UA+4MBFKx1r6hHINULev8+lCfkpwIaeS8RL+op4fr6kQPxnULw8wT8gkuc8I4+L
+P9gg/xDHB44T3ADGZ5Ib6O0DJaNiToO6rnoaaxs0KkotbvDWvRoxEytSbXKoYjYp
+0g==
+-----END CERTIFICATE-----
@@ -81,8 +81,8 @@ if [[ ! -f "$ZOO_DATA_DIR/myid" ]]; then
     echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
 fi

-mkdir -p $(dirname $ZOO_SSL_KEYSTORE_LOCATION)
-mkdir -p $(dirname $ZOO_SSL_TRUSTSTORE_LOCATION)
+mkdir -p "$(dirname $ZOO_SSL_KEYSTORE_LOCATION)"
+mkdir -p "$(dirname $ZOO_SSL_TRUSTSTORE_LOCATION)"

 if [[ ! -f "$ZOO_SSL_KEYSTORE_LOCATION" ]]; then
     keytool -genkeypair -alias zookeeper -keyalg RSA -validity 365 -keysize 2048 -dname "cn=zookeeper" -keypass password -keystore $ZOO_SSL_KEYSTORE_LOCATION -storepass password -deststoretype pkcs12
@@ -552,6 +552,66 @@ create table query_metric_stats_denorm engine File(TSVWithNamesAndTypes,
         order by test, query_index, metric_name
     ;
     " 2> >(tee -a analyze/errors.log 1>&2)
+
+    # Fetch historical query variability thresholds from the CI database
+    if [ -v CHPC_DATABASE_URL ]
+    then
+        set +x # Don't show password in the log
+        client=(clickhouse-client
+            # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000
+            # so I have to extract host and port with clickhouse-local. I tried to use
+            # Poco URI parser to support this in the client, but it's broken and can't
+            # parse host:port.
+            $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV")
+            --secure
+            --user "${CHPC_DATABASE_USER}"
+            --password "${CHPC_DATABASE_PASSWORD}"
+            --config "right/config/client_config.xml"
+            --database perftest
+            --date_time_input_format=best_effort)
+
+
+        # Precision is going to be 1.5 times worse for PRs, because we run the queries
+        # less times. How do I know it? I ran this:
+        # SELECT quantilesExact(0., 0.1, 0.5, 0.75, 0.95, 1.)(p / m)
+        # FROM
+        # (
+        #     SELECT
+        #         quantileIf(0.95)(stat_threshold, pr_number = 0) AS m,
+        #         quantileIf(0.95)(stat_threshold, (pr_number != 0) AND (abs(diff) < stat_threshold)) AS p
+        #     FROM query_metrics_v2
+        #     WHERE (event_date > (today() - toIntervalMonth(1))) AND (metric = 'client_time')
+        #     GROUP BY
+        #         test,
+        #         query_index,
+        #         query_display_name
+        #     HAVING count(*) > 100
+        # )
+        #
+        # The file can be empty if the server is inaccessible, so we can't use
+        # TSVWithNamesAndTypes.
+        #
+        "${client[@]}" --query "
+            select test, query_index,
+                quantileExact(0.99)(abs(diff)) * 1.5 AS max_diff,
+                quantileExactIf(0.99)(stat_threshold, abs(diff) < stat_threshold) * 1.5 AS max_stat_threshold,
+                query_display_name
+            from query_metrics_v2
+            -- We use results at least one week in the past, so that the current
+            -- changes do not immediately influence the statistics, and we have
+            -- some time to notice that something is wrong.
+            where event_date between now() - interval 1 month - interval 1 week
+                and now() - interval 1 week
+                and metric = 'client_time'
+                and pr_number = 0
+            group by test, query_index, query_display_name
+            having count(*) > 100
+            " > analyze/historical-thresholds.tsv
+        set -x
+    else
+        touch analyze/historical-thresholds.tsv
+    fi
+
 }

 # Analyze results
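The block above pulls per-query variability statistics from past CI runs and turns them into thresholds: the 0.99 quantile of the historical |diff| (and of the stat_threshold), inflated by 1.5 because PR runs repeat each query fewer times. A small Python sketch of that computation on a plain list of historical diffs (illustration only; in CI it is the ClickHouse SQL shown above):

```python
def historical_threshold(diffs, quantile=0.99, inflation=1.5):
    """0.99 quantile of |diff| over past runs, widened for the noisier PR runs."""
    if not diffs:
        return None
    values = sorted(abs(d) for d in diffs)
    # Nearest-rank quantile on the sorted values.
    idx = min(len(values) - 1, int(quantile * len(values)))
    return values[idx] * inflation

past_diffs = [0.01, -0.02, 0.03, 0.05, -0.04, 0.02, 0.08]
print(historical_threshold(past_diffs))  # ~0.12 for this toy data
```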
@@ -596,6 +656,26 @@ create view query_metric_stats as
         diff float, stat_threshold float')
     ;

+create table report_thresholds engine File(TSVWithNamesAndTypes, 'report/thresholds.tsv')
+    as select
+        query_display_names.test test, query_display_names.query_index query_index,
+        ceil(greatest(0.1, historical_thresholds.max_diff,
+            test_thresholds.report_threshold), 2) changed_threshold,
+        ceil(greatest(0.2, historical_thresholds.max_stat_threshold,
+            test_thresholds.report_threshold + 0.1), 2) unstable_threshold,
+        query_display_names.query_display_name query_display_name
+    from query_display_names
+        left join file('analyze/historical-thresholds.tsv', TSV,
+            'test text, query_index int, max_diff float, max_stat_threshold float,
+            query_display_name text') historical_thresholds
+        on query_display_names.test = historical_thresholds.test
+            and query_display_names.query_index = historical_thresholds.query_index
+            and query_display_names.query_display_name = historical_thresholds.query_display_name
+        left join file('analyze/report-thresholds.tsv', TSV,
+            'test text, report_threshold float') test_thresholds
+        on query_display_names.test = test_thresholds.test
+    ;
+
 -- Main statistics for queries -- query time as reported in query log.
 create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
     as select
@@ -610,23 +690,23 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
         -- uncaught regressions, because for the default 7 runs we do for PRs,
         -- the randomization distribution has only 16 values, so the max quantile
         -- is actually 0.9375.
-        abs(diff) > report_threshold and abs(diff) >= stat_threshold as changed_fail,
-        abs(diff) > report_threshold - 0.05 and abs(diff) >= stat_threshold as changed_show,
+        abs(diff) > changed_threshold and abs(diff) >= stat_threshold as changed_fail,
+        abs(diff) > changed_threshold - 0.05 and abs(diff) >= stat_threshold as changed_show,

-        not changed_fail and stat_threshold > report_threshold + 0.10 as unstable_fail,
-        not changed_show and stat_threshold > report_threshold - 0.05 as unstable_show,
+        not changed_fail and stat_threshold > unstable_threshold as unstable_fail,
+        not changed_show and stat_threshold > unstable_threshold - 0.05 as unstable_show,

         left, right, diff, stat_threshold,
-        if(report_threshold > 0, report_threshold, 0.10) as report_threshold,
         query_metric_stats.test test, query_metric_stats.query_index query_index,
-        query_display_name
+        query_display_names.query_display_name query_display_name
     from query_metric_stats
-    left join file('analyze/report-thresholds.tsv', TSV,
-        'test text, report_threshold float') thresholds
-    on query_metric_stats.test = thresholds.test
     left join query_display_names
         on query_metric_stats.test = query_display_names.test
             and query_metric_stats.query_index = query_display_names.query_index
+    left join report_thresholds
+        on query_display_names.test = report_thresholds.test
+            and query_display_names.query_index = report_thresholds.query_index
+            and query_display_names.query_display_name = report_thresholds.query_display_name
     -- 'server_time' is rounded down to ms, which might be bad for very short queries.
     -- Use 'client_time' instead.
     where metric_name = 'client_time'
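Taken together, the two hunks above give every query a `changed_threshold` and an `unstable_threshold` (the larger of the historical estimate and the per-test `report_threshold`, with floors of 0.1 and 0.2, rounded up to two decimals) and then compare `diff` and `stat_threshold` against them. The Python sketch below restates that decision logic; the function and argument names are illustrative, not taken from the scripts.

```python
import math

def ceil2(x):
    # ceil(x, 2) as in the SQL: round up to two decimals
    # (round first to dodge float artifacts like 0.1 * 100 == 10.000000000000002)
    return math.ceil(round(x * 100, 6)) / 100

def classify(diff, stat_threshold, max_diff=0.0, max_stat_threshold=0.0,
             report_threshold=0.1):
    changed_threshold = ceil2(max(0.1, max_diff, report_threshold))
    unstable_threshold = ceil2(max(0.2, max_stat_threshold, report_threshold + 0.1))

    changed_fail = abs(diff) > changed_threshold and abs(diff) >= stat_threshold
    unstable_fail = not changed_fail and stat_threshold > unstable_threshold
    return changed_fail, unstable_fail

# e.g. a 25% slowdown with a tight randomization threshold is reported as changed:
print(classify(diff=0.25, stat_threshold=0.05))   # (True, False)
```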
@@ -889,7 +969,6 @@ create table all_query_metrics_tsv engine File(TSV, 'report/all-query-metrics.ts
     order by test, query_index;
 " 2> >(tee -a report/errors.log 1>&2)

-
 # Prepare source data for metrics and flamegraphs for queries that were profiled
 # by perf.py.
 for version in {right,left}
@@ -1148,6 +1227,55 @@ unset IFS

 function upload_results
 {
+    # Prepare info for the CI checks table.
+    rm ci-checks.tsv
+    clickhouse-local --query "
+create view queries as select * from file('report/queries.tsv', TSVWithNamesAndTypes,
+    'changed_fail int, changed_show int, unstable_fail int, unstable_show int,
+        left float, right float, diff float, stat_threshold float,
+        test text, query_index int, query_display_name text');
+
+create table ci_checks engine File(TSVWithNamesAndTypes, 'ci-checks.tsv')
+    as select
+        $PR_TO_TEST pull_request_number,
+        '$SHA_TO_TEST' commit_sha,
+        'Performance' check_name,
+        '$(sed -n 's/.*<!--status: \(.*\)-->/\1/p' report.html)' check_status,
+        -- TODO toDateTime() can't parse output of 'date', so no time for now.
+        ($(date +%s) - $CHPC_CHECK_START_TIMESTAMP) * 1000 check_duration_ms,
+        fromUnixTimestamp($CHPC_CHECK_START_TIMESTAMP) check_start_time,
+        test_name,
+        test_status,
+        test_duration_ms,
+        report_url,
+        $PR_TO_TEST = 0
+            ? 'https://github.com/ClickHouse/ClickHouse/commit/$SHA_TO_TEST'
+            : 'https://github.com/ClickHouse/ClickHouse/pull/$PR_TO_TEST' pull_request_url,
+        '' commit_url,
+        '' task_url,
+        '' base_ref,
+        '' base_repo,
+        '' head_ref,
+        '' head_repo
+    from (
+        select '' test_name,
+            '$(sed -n 's/.*<!--message: \(.*\)-->/\1/p' report.html)' test_status,
+            0 test_duration_ms,
+            'https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/performance_comparison/report.html#fail1' report_url
+        union all
+            select test || ' #' || toString(query_index), 'slower' test_status, 0 test_duration_ms,
+                'https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/performance_comparison/report.html#changes-in-performance.'
+                    || test || '.' || toString(query_index) report_url
+            from queries where changed_fail != 0 and diff > 0
+        union all
+            select test || ' #' || toString(query_index), 'unstable' test_status, 0 test_duration_ms,
+                'https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/performance_comparison/report.html#unstable-queries.'
+                    || test || '.' || toString(query_index) report_url
+            from queries where unstable_fail != 0
+    )
+    ;
+"
+
     if ! [ -v CHPC_DATABASE_URL ]
     then
         echo Database for test results is not specified, will not upload them.
@@ -1216,6 +1344,10 @@ $REF_SHA $SHA_TO_TEST $(numactl --show | sed -n 's/^cpubind:[[:space:]]\+/numact
 $REF_SHA $SHA_TO_TEST $(numactl --hardware | sed -n 's/^available:[[:space:]]\+/numactl-available /p')
 EOF

+    # Also insert some data about the check into the CI checks table.
+    "${client[@]}" --query "INSERT INTO "'"'"gh-data"'"'".checks FORMAT TSVWithNamesAndTypes" \
+        < ci-checks.tsv
+
     set -x
 }

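The `ci_checks` table built above boils down to one summary row plus one row per slower or unstable query, each pointing at an anchor in `report.html`. Below is a condensed Python sketch of that row assembly; the field layout follows the SQL, but the sample data and the report URL are placeholders.

```python
# Simplified sketch of how the ci-checks rows are assembled: one overall
# status row plus one row per slower or unstable query, each pointing at the
# relevant anchor in report.html. Row fields mirror
# (test_name, test_status, test_duration_ms, report_url) from the SQL above.

def ci_check_rows(check_status, queries, report_base):
    rows = [("", check_status, 0, f"{report_base}#fail1")]
    for q in queries:
        name = f"{q['test']} #{q['query_index']}"
        if q["changed_fail"] and q["diff"] > 0:
            anchor = f"#changes-in-performance.{q['test']}.{q['query_index']}"
            rows.append((name, "slower", 0, report_base + anchor))
        if q["unstable_fail"]:
            anchor = f"#unstable-queries.{q['test']}.{q['query_index']}"
            rows.append((name, "unstable", 0, report_base + anchor))
    return rows

rows = ci_check_rows("1 slower",
                     [{"test": "arithmetic", "query_index": 3,
                       "changed_fail": 1, "unstable_fail": 0, "diff": 0.3}],
                     "https://example.com/report.html")
print(rows)
```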
@@ -1,6 +1,9 @@
 #!/bin/bash
 set -ex

+CHPC_CHECK_START_TIMESTAMP="$(date +%s)"
+export CHPC_CHECK_START_TIMESTAMP
+
 # Use the packaged repository to find the revision we will compare to.
 function find_reference_sha
 {
@@ -44,7 +44,7 @@ parser.add_argument('--port', nargs='*', default=[9000], help="Space-separated l
 parser.add_argument('--runs', type=int, default=1, help='Number of query runs per server.')
 parser.add_argument('--max-queries', type=int, default=None, help='Test no more than this number of queries, chosen at random.')
 parser.add_argument('--queries-to-run', nargs='*', type=int, default=None, help='Space-separated list of indexes of queries to test.')
-parser.add_argument('--max-query-seconds', type=int, default=10, help='For how many seconds at most a query is allowed to run. The script finishes with error if this time is exceeded.')
+parser.add_argument('--max-query-seconds', type=int, default=15, help='For how many seconds at most a query is allowed to run. The script finishes with error if this time is exceeded.')
 parser.add_argument('--profile-seconds', type=int, default=0, help='For how many seconds to profile a query for which the performance has changed.')
 parser.add_argument('--long', action='store_true', help='Do not skip the tests tagged as long.')
 parser.add_argument('--print-queries', action='store_true', help='Print test queries and exit.')
@@ -273,8 +273,14 @@ for query_index in queries_to_run:
     prewarm_id = f'{query_prefix}.prewarm0'

     try:
-        # Will also detect too long queries during warmup stage
-        res = c.execute(q, query_id = prewarm_id, settings = {'max_execution_time': args.max_query_seconds})
+        # During the warmup runs, we will also:
+        # * detect queries that are exceedingly long, to fail fast,
+        # * collect profiler traces, which might be helpful for analyzing
+        #   test coverage. We disable profiler for normal runs because
+        #   it makes the results unstable.
+        res = c.execute(q, query_id = prewarm_id,
+            settings = {'max_execution_time': args.max_query_seconds,
+                'query_profiler_real_time_period_ns': 10000000})
     except clickhouse_driver.errors.Error as e:
         # Add query id to the exception to make debugging easier.
         e.args = (prewarm_id, *e.args)
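The prewarm change above passes per-query settings through `clickhouse_driver`: a hard execution-time limit plus a 10 ms real-time profiler period that is enabled only for warmup runs. A minimal standalone sketch of the same call pattern, assuming a local server; the host, query, and query id are placeholders.

```python
# Minimal standalone sketch of the prewarm call pattern with clickhouse_driver.
# Host/port and the query are placeholders; the settings mirror the ones added
# by this hunk (a 10 ms profiler period and a hard execution-time limit).
import clickhouse_driver

client = clickhouse_driver.Client(host="localhost", port=9000)
try:
    rows = client.execute(
        "SELECT sum(number) FROM numbers(1000000)",
        query_id="example.prewarm0",
        settings={
            "max_execution_time": 15,
            "query_profiler_real_time_period_ns": 10_000_000,
        },
    )
except clickhouse_driver.errors.Error as e:
    # Mirror the script: prepend the query id so failures are easy to trace.
    e.args = ("example.prewarm0", *e.args)
    raise
```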
@@ -359,10 +365,11 @@ for query_index in queries_to_run:
         # For very short queries we have a special mode where we run them for at
         # least some time. The recommended lower bound of run time for "normal"
         # queries is about 0.1 s, and we run them about 10 times, giving the
-        # time per query per server of about one second. Use this value as a
-        # reference for "short" queries.
+        # time per query per server of about one second. Run "short" queries
+        # for longer time, because they have a high percentage of overhead and
+        # might give less stable results.
         if is_short[query_index]:
-            if server_seconds >= 2 * len(this_query_connections):
+            if server_seconds >= 8 * len(this_query_connections):
                 break
             # Also limit the number of runs, so that we don't go crazy processing
             # the results -- 'eqmed.sql' is really suboptimal.
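The new comment and the `8 *` factor change how long "short" queries are exercised: they keep re-running until the accumulated server time reaches eight seconds per server instead of two. A small Python sketch of that accumulation loop follows; the timings are simulated and the run cap is illustrative, not the value used by perf.py.

```python
# Sketch of the run-loop semantics for "short" queries: keep re-running the
# query until the accumulated server time reaches 8 s per connection (it was
# 2 s before this change), with a cap on the number of runs. Timings are
# simulated here; in perf.py they come from the actual query executions.
import random

def run_short_query(connections, max_runs=500):
    server_seconds = 0.0
    runs = 0
    while True:
        if server_seconds >= 8 * len(connections):
            break
        if runs >= max_runs:
            break
        for _ in connections:                              # one execution per server
            server_seconds += random.uniform(0.05, 0.1)    # simulated per-run server time
            runs += 1
    return runs, server_seconds

print(run_short_query(connections=["left", "right"]))
```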
@@ -446,10 +446,16 @@ if args.report == 'main':
                 attrs[3] = f'style="background: {color_bad}"'
             else:
                 attrs[3] = ''
+                # Just don't add the slightly unstable queries we don't consider
+                # errors. It's not clear what the user should do with them.
+                continue
+
             text += tableRow(r, attrs, anchor)

         text += tableEnd()

+        # Don't add an empty table.
+        if very_unstable_queries:
             tables.append(text)

     add_unstable_queries()
@@ -549,16 +555,16 @@ if args.report == 'main':
         message_array.append(str(slower_queries) + ' slower')

     if unstable_partial_queries:
-        unstable_queries += unstable_partial_queries
-        error_tests += unstable_partial_queries
+        very_unstable_queries += unstable_partial_queries
         status = 'failure'

-    if unstable_queries:
-        message_array.append(str(unstable_queries) + ' unstable')
-    # Disabled before fix.
-    # if very_unstable_queries:
-    #     status = 'failure'
+    # Don't show mildly unstable queries, only the very unstable ones we
+    # treat as errors.
+    if very_unstable_queries:
+        if very_unstable_queries > 3:
+            error_tests += very_unstable_queries
+            status = 'failure'
+        message_array.append(str(very_unstable_queries) + ' unstable')

     error_tests += slow_average_tests
     if error_tests:
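After this hunk, mildly unstable queries are dropped from the summary, partially-run unstable queries are folded into the "very unstable" bucket, and only more than three very unstable queries fail the check. A condensed Python sketch of the resulting status logic follows; the counts are passed in directly here, whereas report.py derives them from its result tables.

```python
# Condensed sketch of the status logic after this hunk: mildly unstable
# queries are no longer reported; partially-run unstable queries count as
# "very unstable", and only more than three of those turn the check red.

def check_status(slower_queries, unstable_partial_queries, very_unstable_queries,
                 slow_average_tests):
    status, message_array, error_tests = 'success', [], 0

    if slower_queries:
        error_tests += slower_queries
        status = 'failure'
        message_array.append(f'{slower_queries} slower')

    if unstable_partial_queries:
        very_unstable_queries += unstable_partial_queries
        status = 'failure'

    if very_unstable_queries:
        if very_unstable_queries > 3:
            error_tests += very_unstable_queries
            status = 'failure'
        message_array.append(f'{very_unstable_queries} unstable')

    error_tests += slow_average_tests
    return status, ', '.join(message_array) or 'OK', error_tests

print(check_status(0, 0, 2, 0))   # ('success', '2 unstable', 0)
```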
@@ -2,7 +2,6 @@
 FROM ubuntu:20.04

 RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git openjdk-14-jdk maven python3 --yes --no-install-recommends
-
 RUN wget https://github.com/sqlancer/sqlancer/archive/master.zip -O /sqlancer.zip
 RUN mkdir /sqlancer && \
     cd /sqlancer && \
@@ -8,6 +8,7 @@ RUN apt-get update -y && \
     python3-wheel \
     brotli \
     netcat-openbsd \
+    postgresql-client \
     zstd

 RUN python3 -m pip install \
@@ -90,7 +90,7 @@ clickhouse-client --query "RENAME TABLE datasets.hits_v1 TO test.hits"
 clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
 clickhouse-client --query "SHOW TABLES FROM test"

-./stress --hung-check --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" \
+./stress --hung-check --drop-databases --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" \
    && echo -e 'Test script exit code\tOK' >> /test_output/test_results.tsv \
    || echo -e 'Test script failed\tFAIL' >> /test_output/test_results.tsv

@@ -19,25 +19,25 @@ def get_skip_list_cmd(path):


 def get_options(i):
-    options = ""
+    options = []
     if 0 < i:
-        options += " --order=random"
+        options.append("--order=random")

     if i % 3 == 1:
-        options += " --db-engine=Ordinary"
+        options.append("--db-engine=Ordinary")

     if i % 3 == 2:
-        options += ''' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i)
+        options.append('''--client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))

     # If database name is not specified, new database is created for each functional test.
     # Run some threads with one database for all tests.
     if i % 2 == 1:
-        options += " --database=test_{}".format(i)
+        options.append(" --database=test_{}".format(i))

     if i == 13:
-        options += " --client-option='memory_tracker_fault_probability=0.00001'"
+        options.append(" --client-option='memory_tracker_fault_probability=0.00001'")

-    return options
+    return ' '.join(options)


 def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_time_limit):
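For reference, the rewritten `get_options()` now assembles a list and joins it at the end. The sketch below re-implements it only to show sample outputs; it is a demonstration copy, not the file itself.

```python
# Demonstration copy of the rewritten get_options(), used here only to show
# what a few worker indexes produce.

def get_options(i):
    options = []
    if 0 < i:
        options.append("--order=random")
    if i % 3 == 1:
        options.append("--db-engine=Ordinary")
    if i % 3 == 2:
        options.append('''--client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
    if i % 2 == 1:
        options.append(" --database=test_{}".format(i))
    if i == 13:
        options.append(" --client-option='memory_tracker_fault_probability=0.00001'")
    return ' '.join(options)

print(get_options(1))
# --order=random --db-engine=Ordinary  --database=test_1
print(get_options(2))
# --order=random --client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_2', 's1', 'r1')"
```

Note that the `--database` and `--client-option` entries keep their historical leading space, so the joined command line contains a double space; the shell ignores it.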
@@ -58,7 +58,11 @@ def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_t
         time.sleep(0.5)
     return pipes

-def prepare_for_hung_check():
+def compress_stress_logs(output_path, files_prefix):
+    cmd = f"cd {output_path} && tar -zcf stress_run_logs.tar.gz {files_prefix}* && rm {files_prefix}*"
+    check_output(cmd, shell=True)
+
+def prepare_for_hung_check(drop_databases):
     # FIXME this function should not exist, but...

     # We attach gdb to clickhouse-server before running tests
@@ -91,6 +95,17 @@ def prepare_for_hung_check():
     # Long query from 00084_external_agregation
     call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'SELECT URL, uniq(SearchPhrase) AS u FROM test.hits GROUP BY URL ORDER BY u %'" """, shell=True, stderr=STDOUT)

+    if drop_databases:
+        # Here we try to drop all databases in async mode. If some queries really hung, than drop will hung too.
+        # Otherwise we will get rid of queries which wait for background pool. It can take a long time on slow builds (more than 900 seconds).
+        databases = check_output('clickhouse client -q "SHOW DATABASES"', shell=True).decode('utf-8').strip().split()
+        for db in databases:
+            if db == "system":
+                continue
+            command = f'clickhouse client -q "DROP DATABASE {db}"'
+            # we don't wait for drop
+            Popen(command, shell=True)
+
     # Wait for last queries to finish if any, not longer than 300 seconds
     call("""clickhouse client -q "select sleepEachRow((
         select maxOrDefault(300 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 300
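The database drop above is deliberately fire-and-forget: `Popen` starts each `clickhouse client` and the script moves on without `wait()`, so a hung `DROP` cannot block the hung check itself. A standalone sketch of that pattern; the database list is made up.

```python
# Fire-and-forget drops: Popen starts each client and the caller does not
# wait(), so a stuck DROP cannot stall the rest of the check.
from subprocess import Popen

databases = ["test_1", "test_3", "system"]   # made-up list for the example
procs = []
for db in databases:
    if db == "system":
        continue                              # skip the system database
    procs.append(Popen(f'clickhouse client -q "DROP DATABASE {db}"', shell=True))

# Optionally, see how many drops are still running without blocking on them:
still_running = [p for p in procs if p.poll() is None]
print(f"{len(still_running)} drop queries still running")
```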
@@ -116,10 +131,14 @@ if __name__ == "__main__":
     parser.add_argument("--server-log-folder", default='/var/log/clickhouse-server')
     parser.add_argument("--output-folder")
     parser.add_argument("--global-time-limit", type=int, default=3600)
-    parser.add_argument("--num-parallel", default=cpu_count())
+    parser.add_argument("--num-parallel", type=int, default=cpu_count())
     parser.add_argument('--hung-check', action='store_true', default=False)
+    # make sense only for hung check
+    parser.add_argument('--drop-databases', action='store_true', default=False)

     args = parser.parse_args()
+    if args.drop_databases and not args.hung_check:
+        raise Exception("--drop-databases only used in hung check (--hung-check)")
     func_pipes = []
     func_pipes = run_func_test(args.test_cmd, args.output_folder, args.num_parallel, args.skip_func_tests, args.global_time_limit)

@@ -135,8 +154,13 @@ if __name__ == "__main__":
         time.sleep(5)

     logging.info("All processes finished")
+
+    logging.info("Compressing stress logs")
+    compress_stress_logs(args.output_folder, "stress_test_run_")
+    logging.info("Logs compressed")
+
     if args.hung_check:
-        have_long_running_queries = prepare_for_hung_check()
+        have_long_running_queries = prepare_for_hung_check(args.drop_databases)
         logging.info("Checking if some queries hung")
         cmd = "{} {} {}".format(args.test_cmd, "--hung-check", "00001_select_1")
         res = call(cmd, shell=True, stderr=STDOUT)
@@ -73,4 +73,4 @@ RUN set -x \
 VOLUME /var/lib/docker
 EXPOSE 2375
 ENTRYPOINT ["dockerd-entrypoint.sh"]
-CMD ["sh", "-c", "python3 regression.py --no-color -o classic --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv"]
+CMD ["sh", "-c", "python3 regression.py --no-color -o new-fails --local --clickhouse-binary-path ${CLICKHOUSE_TESTS_SERVER_BIN_PATH} --log test.log ${TESTFLOWS_OPTS}; cat test.log | tfs report results --format json > results.json; /usr/local/bin/process_testflows_result.py || echo -e 'failure\tCannot parse results' > check_status.tsv; find * -type f | grep _instances | grep clickhouse-server | xargs -n1 tar -rvf clickhouse_logs.tar; gzip -9 clickhouse_logs.tar"]
@@ -1,6 +1,15 @@
 #!/bin/bash
 set -e

+echo "Configure to use Yandex dockerhub-proxy"
+mkdir -p /etc/docker/
+cat > /etc/docker/daemon.json << EOF
+{
+    "insecure-registries" : ["dockerhub-proxy.sas.yp-c.yandex.net:5000"],
+    "registry-mirrors" : ["http://dockerhub-proxy.sas.yp-c.yandex.net:5000"]
+}
+EOF
+
 dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 &>/var/log/somefile &

 set +e
@@ -16,14 +25,6 @@ while true; do
 done
 set -e

-echo "Configure to use Yandex dockerhub-proxy"
-cat > /etc/docker/daemon.json << EOF
-{
-    "insecure-registries": ["dockerhub-proxy.sas.yp-c.yandex.net:5000"],
-    "registry-mirrors": ["dockerhub-proxy.sas.yp-c.yandex.net:5000"]
-}
-EOF
-
 echo "Start tests"
 export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse
 export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse
@@ -41,6 +41,14 @@ toc_title: Cloud
 - Built-in monitoring and database management platform
 - Professional database expert technical support and service

+## SberCloud {#sbercloud}
+
+[SberCloud.Advanced](https://sbercloud.ru/en/advanced) provides [MapReduce Service (MRS)](https://docs.sbercloud.ru/mrs/ug/topics/ug__clickhouse.html), a reliable, secure, and easy-to-use enterprise-level platform for storing, processing, and analyzing big data. MRS allows you to quickly create and manage ClickHouse clusters.
+
+- A ClickHouse instance consists of three ZooKeeper nodes and multiple ClickHouse nodes. The Dedicated Replica mode is used to ensure high reliability of dual data copies.
+- MRS provides smooth and elastic scaling capabilities to quickly meet service growth requirements in scenarios where the cluster storage capacity or CPU computing resources are not enough. When you expand the capacity of ClickHouse nodes in a cluster, MRS provides a one-click data balancing tool and gives you the initiative to balance data. You can determine the data balancing mode and time based on service characteristics to ensure service availability, implementing smooth scaling.
+- MRS uses the Elastic Load Balance ensuring high availability deployment architecture to automatically distribute user access traffic to multiple backend nodes, expanding service capabilities to external systems and improving fault tolerance. With the ELB polling mechanism, data is written to local tables and read from distributed tables on different nodes. In this way, data read/write load and high availability of application access are guaranteed.
+
 ## Tencent Cloud {#tencent-cloud}

 [Tencent Managed Service for ClickHouse](https://cloud.tencent.com/product/cdwch) provides the following key features:
@@ -14,4 +14,4 @@ Service categories:
 - [Support](../commercial/support.md)

 !!! note "For service providers"
-    If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service doesn’t fit into existing categories). The easiest way to open a pull-request for documentation page is by using a “pencil” edit button in the top-right corner. If your service available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull-request description).
+    If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service does not fit into existing categories). The easiest way to open a pull-request for documentation page is by using a “pencil” edit button in the top-right corner. If your service available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull-request description).
@@ -120,11 +120,11 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
 - don't switch databases (unless necessary)
 - you can create several table replicas on the same node if needed
 - you can use one of the test cluster definitions when needed (see system.clusters)
-- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when appliable
+- use `number` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable
 - clean up the created objects after test and before the test (DROP IF EXISTS) - in case of some dirty state
 - prefer sync mode of operations (mutations, merges, etc.)
 - use other SQL files in the `0_stateless` folder as an example
-- ensure the feature / feature combination you want to tests is not covered yet with existing tests
+- ensure the feature / feature combination you want to test is not yet covered with existing tests

 #### Commit / push / create PR.

@@ -21,11 +21,11 @@ Various `IColumn` implementations (`ColumnUInt8`, `ColumnString`, and so on) are

 Nevertheless, it is possible to work with individual values as well. To represent an individual value, the `Field` is used. `Field` is just a discriminated union of `UInt64`, `Int64`, `Float64`, `String` and `Array`. `IColumn` has the `operator []` method to get the n-th value as a `Field`, and the `insert` method to append a `Field` to the end of a column. These methods are not very efficient, because they require dealing with temporary `Field` objects representing an individual value. There are more efficient methods, such as `insertFrom`, `insertRangeFrom`, and so on.

-`Field` doesn’t have enough information about a specific data type for a table. For example, `UInt8`, `UInt16`, `UInt32`, and `UInt64` are all represented as `UInt64` in a `Field`.
+`Field` does not have enough information about a specific data type for a table. For example, `UInt8`, `UInt16`, `UInt32`, and `UInt64` are all represented as `UInt64` in a `Field`.

 ## Leaky Abstractions {#leaky-abstractions}

-`IColumn` has methods for common relational transformations of data, but they don’t meet all needs. For example, `ColumnUInt64` doesn’t have a method to calculate the sum of two columns, and `ColumnString` doesn’t have a method to run a substring search. These countless routines are implemented outside of `IColumn`.
+`IColumn` has methods for common relational transformations of data, but they do not meet all needs. For example, `ColumnUInt64` does not have a method to calculate the sum of two columns, and `ColumnString` does not have a method to run a substring search. These countless routines are implemented outside of `IColumn`.

 Various functions on columns can be implemented in a generic, non-efficient way using `IColumn` methods to extract `Field` values, or in a specialized way using knowledge of inner memory layout of data in a specific `IColumn` implementation. It is implemented by casting functions to a specific `IColumn` type and deal with internal representation directly. For example, `ColumnUInt64` has the `getData` method that returns a reference to an internal array, then a separate routine reads or fills that array directly. We have “leaky abstractions” to allow efficient specializations of various routines.

@@ -35,7 +35,7 @@ Various functions on columns can be implemented in a generic, non-efficient way

 `IDataType` and `IColumn` are only loosely related to each other. Different data types can be represented in memory by the same `IColumn` implementations. For example, `DataTypeUInt32` and `DataTypeDateTime` are both represented by `ColumnUInt32` or `ColumnConstUInt32`. In addition, the same data type can be represented by different `IColumn` implementations. For example, `DataTypeUInt8` can be represented by `ColumnUInt8` or `ColumnConstUInt8`.

-`IDataType` only stores metadata. For instance, `DataTypeUInt8` doesn’t store anything at all (except virtual pointer `vptr`) and `DataTypeFixedString` stores just `N` (the size of fixed-size strings).
+`IDataType` only stores metadata. For instance, `DataTypeUInt8` does not store anything at all (except virtual pointer `vptr`) and `DataTypeFixedString` stores just `N` (the size of fixed-size strings).

 `IDataType` has helper methods for various data formats. Examples are methods to serialize a value with possible quoting, to serialize a value for JSON, and to serialize a value as part of the XML format. There is no direct correspondence to data formats. For example, the different data formats `Pretty` and `TabSeparated` can use the same `serializeTextEscaped` helper method from the `IDataType` interface.

@@ -43,7 +43,7 @@ Various functions on columns can be implemented in a generic, non-efficient way

 A `Block` is a container that represents a subset (chunk) of a table in memory. It is just a set of triples: `(IColumn, IDataType, column name)`. During query execution, data is processed by `Block`s. If we have a `Block`, we have data (in the `IColumn` object), we have information about its type (in `IDataType`) that tells us how to deal with that column, and we have the column name. It could be either the original column name from the table or some artificial name assigned for getting temporary results of calculations.

-When we calculate some function over columns in a block, we add another column with its result to the block, and we don’t touch columns for arguments of the function because operations are immutable. Later, unneeded columns can be removed from the block, but not modified. It is convenient for the elimination of common subexpressions.
+When we calculate some function over columns in a block, we add another column with its result to the block, and we do not touch columns for arguments of the function because operations are immutable. Later, unneeded columns can be removed from the block, but not modified. It is convenient for the elimination of common subexpressions.

 Blocks are created for every processed chunk of data. Note that for the same type of calculation, the column names and types remain the same for different blocks, and only column data changes. It is better to split block data from the block header because small block sizes have a high overhead of temporary strings for copying shared_ptrs and column names.

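The `Block` description in the documentation above maps naturally onto a toy model: a list of `(column, type, name)` triples where computing a function appends a new column and leaves the argument columns untouched. The sketch below is an illustrative Python analogy, not the ClickHouse API.

```python
# Toy analogy of a Block: a list of (column_data, type_name, column_name)
# triples. Computing a function over columns appends the result as a new
# column; the argument columns are never modified, which is what makes
# common-subexpression elimination easy.

def block_add_column(block, result_name, type_name, func, *arg_names):
    by_name = {name: data for data, _, name in block}
    args = [by_name[name] for name in arg_names]
    result = [func(*row) for row in zip(*args)]      # row-by-row for clarity only
    return block + [(result, type_name, result_name)]

block = [
    ([1, 2, 3], "UInt64", "a"),
    ([10, 20, 30], "UInt64", "b"),
]
block = block_add_column(block, "plus(a, b)", "UInt64", lambda x, y: x + y, "a", "b")
print([name for _, _, name in block])   # ['a', 'b', 'plus(a, b)']
```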
@@ -118,11 +118,11 @@ Interpreters are responsible for creating the query execution pipeline from an `

 There are ordinary functions and aggregate functions. For aggregate functions, see the next section.

-Ordinary functions don’t change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`’s of data to implement vectorized query execution.
+Ordinary functions do not change the number of rows – they work as if they are processing each row independently. In fact, functions are not called for individual rows, but for `Block`’s of data to implement vectorized query execution.

 There are some miscellaneous functions, like [blockSize](../sql-reference/functions/other-functions.md#function-blocksize), [rowNumberInBlock](../sql-reference/functions/other-functions.md#function-rownumberinblock), and [runningAccumulate](../sql-reference/functions/other-functions.md#runningaccumulate), that exploit block processing and violate the independence of rows.

-ClickHouse has strong typing, so there’s no implicit type conversion. If a function doesn’t support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function.
+ClickHouse has strong typing, so there’s no implicit type conversion. If a function does not support a specific combination of types, it throws an exception. But functions can work (be overloaded) for many different combinations of types. For example, the `plus` function (to implement the `+` operator) works for any combination of numeric types: `UInt8` + `Float32`, `UInt16` + `Int8`, and so on. Also, some variadic functions can accept any number of arguments, such as the `concat` function.

 Implementing a function may be slightly inconvenient because a function explicitly dispatches supported data types and supported `IColumns`. For example, the `plus` function has code generated by instantiation of a C++ template for each combination of numeric types, and constant or non-constant left and right arguments.

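To make the "functions are called for whole `Block`s, not rows" point above concrete, here is an illustrative Python analogy of a vectorized `plus` over two columns; the real implementation dispatches over concrete `IColumn`/`IDataType` combinations rather than plain lists.

```python
# Illustrative analogy of vectorized execution: the "function" receives whole
# columns and produces a whole result column, instead of being invoked once
# per row. The real engine dispatches on concrete column types; plain Python
# numbers stand in for that here.

def plus_columns(left, right):
    if len(left) != len(right):
        raise ValueError("columns in one block must have equal row counts")
    return [l + r for l, r in zip(left, right)]   # one call handles every row

uint8_col = [1, 2, 3, 4]
float32_col = [0.5, 0.25, 0.125, 0.0625]
print(plus_columns(uint8_col, float32_col))       # [1.5, 2.25, 3.125, 4.0625]
```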
@@ -152,7 +152,7 @@ Internally, it is just a primitive multithreaded server without coroutines or fi

 The server initializes the `Context` class with the necessary environment for query execution: the list of available databases, users and access rights, settings, clusters, the process list, the query log, and so on. Interpreters use this environment.

-We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we don’t want to maintain it eternally, and we are removing support for old versions after about one year.
+We maintain full backward and forward compatibility for the server TCP protocol: old clients can talk to new servers, and new clients can talk to old servers. But we do not want to maintain it eternally, and we are removing support for old versions after about one year.

 !!! note "Note"
     For most external applications, we recommend using the HTTP interface because it is simple and easy to use. The TCP protocol is more tightly linked to internal data structures: it uses an internal format for passing blocks of data, and it uses custom framing for compressed data. We haven’t released a C library for that protocol because it requires linking most of the ClickHouse codebase, which is not practical.
@@ -169,13 +169,13 @@ There is no global query plan for distributed query execution. Each node has its

 `MergeTree` is a family of storage engines that supports indexing by primary key. The primary key can be an arbitrary tuple of columns or expressions. Data in a `MergeTree` table is stored in “parts”. Each part stores data in the primary key order, so data is ordered lexicographically by the primary key tuple. All the table columns are stored in separate `column.bin` files in these parts. The files consist of compressed blocks. Each block is usually from 64 KB to 1 MB of uncompressed data, depending on the average value size. The blocks consist of column values placed contiguously one after the other. Column values are in the same order for each column (the primary key defines the order), so when you iterate by many columns, you get values for the corresponding rows.

-The primary key itself is “sparse”. It doesn’t address every single row, but only some ranges of data. A separate `primary.idx` file has the value of the primary key for each N-th row, where N is called `index_granularity` (usually, N = 8192). Also, for each column, we have `column.mrk` files with “marks,” which are offsets to each N-th row in the data file. Each mark is a pair: the offset in the file to the beginning of the compressed block, and the offset in the decompressed block to the beginning of data. Usually, compressed blocks are aligned by marks, and the offset in the decompressed block is zero. Data for `primary.idx` always resides in memory, and data for `column.mrk` files is cached.
+The primary key itself is “sparse”. It does not address every single row, but only some ranges of data. A separate `primary.idx` file has the value of the primary key for each N-th row, where N is called `index_granularity` (usually, N = 8192). Also, for each column, we have `column.mrk` files with “marks,” which are offsets to each N-th row in the data file. Each mark is a pair: the offset in the file to the beginning of the compressed block, and the offset in the decompressed block to the beginning of data. Usually, compressed blocks are aligned by marks, and the offset in the decompressed block is zero. Data for `primary.idx` always resides in memory, and data for `column.mrk` files is cached.

 When we are going to read something from a part in `MergeTree`, we look at `primary.idx` data and locate ranges that could contain requested data, then look at `column.mrk` data and calculate offsets for where to start reading those ranges. Because of sparseness, excess data may be read. ClickHouse is not suitable for a high load of simple point queries, because the entire range with `index_granularity` rows must be read for each key, and the entire compressed block must be decompressed for each column. We made the index sparse because we must be able to maintain trillions of rows per single server without noticeable memory consumption for the index. Also, because the primary key is sparse, it is not unique: it cannot check the existence of the key in the table at INSERT time. You could have many rows with the same key in a table.

 When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by primary key order and forms a new part. There are background threads that periodically select some parts and merge them into a single sorted part to keep the number of parts relatively low. That’s why it is called `MergeTree`. Of course, merging leads to “write amplification”. All parts are immutable: they are only created and deleted, but not modified. When SELECT is executed, it holds a snapshot of the table (a set of parts). After merging, we also keep old parts for some time to make a recovery after failure easier, so if we see that some merged part is probably broken, we can replace it with its source parts.

-`MergeTree` is not an LSM tree because it doesn’t contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity’s sake, and because we are already inserting data in batches in our applications.
+`MergeTree` is not an LSM tree because it does not contain “memtable” and “log”: inserted data is written directly to the filesystem. This makes it suitable only to INSERT data in batches, not by individual row and not very frequently – about once per second is ok, but a thousand times a second is not. We did it this way for simplicity’s sake, and because we are already inserting data in batches in our applications.

 There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form.

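The sparse `primary.idx`/`column.mrk` scheme described in the documentation above can be illustrated in a few lines: keep only every N-th key, and answer a lookup with the granule that may contain it, accepting that the whole granule has to be read. This is an illustrative analogy, not ClickHouse's on-disk format.

```python
# Illustrative analogy of the sparse primary index: primary.idx keeps the key
# of every N-th row (N = index_granularity), so a lookup only narrows the scan
# down to a granule of N rows, which must then be read and filtered.
import bisect

index_granularity = 8      # 8192 in a real MergeTree part; small for the demo
keys = list(range(0, 1000, 3))                     # sorted primary key column
sparse_index = keys[::index_granularity]           # one entry per granule

def granule_for(key):
    # rightmost granule whose first key is <= key
    mark = bisect.bisect_right(sparse_index, key) - 1
    mark = max(mark, 0)
    start = mark * index_granularity
    return start, min(start + index_granularity, len(keys))

start, end = granule_for(100)
print(start, end, keys[start:end])   # the 8-row range that may contain 100
```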
@ -185,7 +185,7 @@ Replication in ClickHouse can be configured on a per-table basis. You could have
|
|||||||
|
|
||||||
Replication is implemented in the `ReplicatedMergeTree` storage engine. The path in `ZooKeeper` is specified as a parameter for the storage engine. All tables with the same path in `ZooKeeper` become replicas of each other: they synchronize their data and maintain consistency. Replicas can be added and removed dynamically simply by creating or dropping a table.
|
Replication is implemented in the `ReplicatedMergeTree` storage engine. The path in `ZooKeeper` is specified as a parameter for the storage engine. All tables with the same path in `ZooKeeper` become replicas of each other: they synchronize their data and maintain consistency. Replicas can be added and removed dynamically simply by creating or dropping a table.
|
||||||
|
|
||||||
Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse doesn’t support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts, just-inserted data might be lost if one node fails.
|
Replication uses an asynchronous multi-master scheme. You can insert data into any replica that has a session with `ZooKeeper`, and data is replicated to all other replicas asynchronously. Because ClickHouse does not support UPDATEs, replication is conflict-free. As there is no quorum acknowledgment of inserts, just-inserted data might be lost if one node fails.
|
||||||
|
|
||||||
Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the “get the part” action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. One of the leaders initiates a new merge first and writes “merge parts” actions to the log. Multiple replicas (or all) can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges.
|
Metadata for replication is stored in ZooKeeper. There is a replication log that lists what actions to do. Actions are: get part; merge parts; drop a partition, and so on. Each replica copies the replication log to its queue and then executes the actions from the queue. For example, on insertion, the “get the part” action is created in the log, and every replica downloads that part. Merges are coordinated between replicas to get byte-identical results. All parts are merged in the same way on all replicas. One of the leaders initiates a new merge first and writes “merge parts” actions to the log. Multiple replicas (or all) can be leaders at the same time. A replica can be prevented from becoming a leader using the `merge_tree` setting `replicated_can_become_leader`. The leaders are responsible for scheduling background merges.
|
||||||
|
|
||||||
|
@ -20,7 +20,7 @@ Install the latest [Xcode](https://apps.apple.com/am/app/xcode/id497799835?mt=12
|
|||||||
|
|
||||||
Open it at least once to accept the end-user license agreement and automatically install the required components.
|
Open it at least once to accept the end-user license agreement and automatically install the required components.
|
||||||
|
|
||||||
Then, make sure that the latest Comman Line Tools are installed and selected in the system:
|
Then, make sure that the latest Command Line Tools are installed and selected in the system:
|
||||||
|
|
||||||
``` bash
|
``` bash
|
||||||
sudo rm -rf /Library/Developer/CommandLineTools
|
sudo rm -rf /Library/Developer/CommandLineTools
|
||||||
|
@ -134,7 +134,7 @@ $ ./release
|
|||||||
|
|
||||||
## Faster builds for development
|
## Faster builds for development
|
||||||
|
|
||||||
Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into serveral shared libraries. To enable these tweaks, pass the following flags to `cmake`:
|
Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use the `lld` linker and to use the 'split' build configuration, which builds a separate binary for every tool and further splits the code into several shared libraries. To enable these tweaks, pass the following flags to `cmake`:
|
||||||
|
|
||||||
```
|
```
|
||||||
-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
|
-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
|
||||||
|
@ -15,7 +15,7 @@ ClickHouse cannot work or build on a 32-bit system. You should acquire access to
|
|||||||
|
|
||||||
To start working with ClickHouse repository you will need a GitHub account.
|
To start working with ClickHouse repository you will need a GitHub account.
|
||||||
|
|
||||||
You probably already have one, but if you don’t, please register at https://github.com. In case you do not have SSH keys, you should generate them and then upload them on GitHub. It is required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those.
|
You probably already have one, but if you do not, please register at https://github.com. In case you do not have SSH keys, you should generate them and then upload them on GitHub. It is required for sending over your patches. It is also possible to use the same SSH keys that you use with any other SSH servers - probably you already have those.
|
||||||
|
|
||||||
Create a fork of ClickHouse repository. To do that please click on the “fork” button in the upper right corner at https://github.com/ClickHouse/ClickHouse. It will fork your own copy of ClickHouse/ClickHouse to your account.
|
Create a fork of ClickHouse repository. To do that please click on the “fork” button in the upper right corner at https://github.com/ClickHouse/ClickHouse. It will fork your own copy of ClickHouse/ClickHouse to your account.
|
||||||
|
|
||||||
|
@ -195,7 +195,7 @@ std::cerr << static_cast<int>(c) << std::endl;
|
|||||||
|
|
||||||
The same is true for small methods in any classes or structs.
|
The same is true for small methods in any classes or structs.
|
||||||
|
|
||||||
For templated classes and structs, don’t separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).
|
For templated classes and structs, do not separate the method declarations from the implementation (because otherwise they must be defined in the same translation unit).
|
||||||
|
|
||||||
**31.** You can wrap lines at 140 characters, instead of 80.
|
**31.** You can wrap lines at 140 characters, instead of 80.
|
||||||
|
|
||||||
@ -442,7 +442,7 @@ Use `RAII` and see above.
|
|||||||
|
|
||||||
**3.** Error handling.
|
**3.** Error handling.
|
||||||
|
|
||||||
Use exceptions. In most cases, you only need to throw an exception, and don’t need to catch it (because of `RAII`).
|
Use exceptions. In most cases, you only need to throw an exception, and do not need to catch it (because of `RAII`).
|
||||||
|
|
||||||
In offline data processing applications, it’s often acceptable to not catch exceptions.
|
In offline data processing applications, it’s often acceptable to not catch exceptions.
|
||||||
|
|
||||||
@ -599,7 +599,7 @@ public:
|
|||||||
|
|
||||||
There is no need to use a separate `namespace` for application code.
|
There is no need to use a separate `namespace` for application code.
|
||||||
|
|
||||||
Small libraries don’t need this, either.
|
Small libraries do not need this, either.
|
||||||
|
|
||||||
For medium to large libraries, put everything in a `namespace`.
|
For medium to large libraries, put everything in a `namespace`.
|
||||||
|
|
||||||
@ -755,9 +755,9 @@ If there is a good solution already available, then use it, even if it means you
|
|||||||
|
|
||||||
(But be prepared to remove bad libraries from code.)
|
(But be prepared to remove bad libraries from code.)
|
||||||
|
|
||||||
**3.** You can install a library that isn’t in the packages, if the packages don’t have what you need or have an outdated version or the wrong type of compilation.
|
**3.** You can install a library that isn’t in the packages, if the packages do not have what you need or have an outdated version or the wrong type of compilation.
|
||||||
|
|
||||||
**4.** If the library is small and doesn’t have its own complex build system, put the source files in the `contrib` folder.
|
**4.** If the library is small and does not have its own complex build system, put the source files in the `contrib` folder.
|
||||||
|
|
||||||
**5.** Preference is always given to libraries that are already in use.
|
**5.** Preference is always given to libraries that are already in use.
|
||||||
|
|
||||||
|
@ -35,7 +35,7 @@ Tests should use (create, drop, etc) only tables in `test` database that is assu
|
|||||||
|
|
||||||
### Choosing the Test Name
|
### Choosing the Test Name
|
||||||
|
|
||||||
The name of the test starts with a five-digit prefix followed by a descriptive name, such as `00422_hash_function_constexpr.sql`. To choose the prefix, find the largest prefix already present in the directory, and increment it by one. In the meantime, some other tests might be added with the same numeric prefix, but this is OK and doesn't lead to any problems, you don't have to change it later.
|
The name of the test starts with a five-digit prefix followed by a descriptive name, such as `00422_hash_function_constexpr.sql`. To choose the prefix, find the largest prefix already present in the directory, and increment it by one. In the meantime, some other tests might be added with the same numeric prefix, but this is OK and does not lead to any problems, you don't have to change it later.
|
||||||
|
|
||||||
Some tests are marked with `zookeeper`, `shard` or `long` in their names. `zookeeper` is for tests that use ZooKeeper. `shard` is for tests that require the server to listen on `127.0.0.*`; `distributed` or `global` have the same meaning. `long` is for tests that run slightly longer than one second. You can disable these groups of tests using `--no-zookeeper`, `--no-shard` and `--no-long` options, respectively. Make sure to add a proper prefix to your test name if it needs ZooKeeper or distributed queries.
|
Some tests are marked with `zookeeper`, `shard` or `long` in their names. `zookeeper` is for tests that use ZooKeeper. `shard` is for tests that require the server to listen on `127.0.0.*`; `distributed` or `global` have the same meaning. `long` is for tests that run slightly longer than one second. You can disable these groups of tests using `--no-zookeeper`, `--no-shard` and `--no-long` options, respectively. Make sure to add a proper prefix to your test name if it needs ZooKeeper or distributed queries.
|
||||||
|
|
||||||
@ -51,7 +51,7 @@ Do not check for a particular wording of error message, it may change in the fut
|
|||||||
|
|
||||||
### Testing a Distributed Query
|
### Testing a Distributed Query
|
||||||
|
|
||||||
If you want to use distributed queries in functional tests, you can leverage `remote` table function with `127.0.0.{1..2}` addresses for the server to query itself; or you can use predefined test clusters in server configuration file like `test_shard_localhost`. Remember to add the words `shard` or `distributed` to the test name, so that it is ran in CI in correct configurations, where the server is configured to support distributed queries.
|
If you want to use distributed queries in functional tests, you can leverage `remote` table function with `127.0.0.{1..2}` addresses for the server to query itself; or you can use predefined test clusters in server configuration file like `test_shard_localhost`. Remember to add the words `shard` or `distributed` to the test name, so that it is run in CI in correct configurations, where the server is configured to support distributed queries.
|
||||||
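As a hedged sketch, such a self-query might look like this; the address pattern comes straight from the paragraph above, and `system.one` is just a convenient one-row system table:

``` sql
-- The server queries itself through two loopback addresses,
-- which exercises the distributed query path in a single-server test.
SELECT count()
FROM remote('127.0.0.{1..2}', system.one);
```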
|
|
||||||
|
|
||||||
## Known Bugs {#known-bugs}
|
## Known Bugs {#known-bugs}
|
||||||
@ -60,11 +60,11 @@ If we know some bugs that can be easily reproduced by functional tests, we place
|
|||||||
|
|
||||||
## Integration Tests {#integration-tests}
|
## Integration Tests {#integration-tests}
|
||||||
|
|
||||||
Integration tests allow to test ClickHouse in clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software.
|
Integration tests allow testing ClickHouse in clustered configuration and ClickHouse interaction with other servers like MySQL, Postgres, MongoDB. They are useful to emulate network splits, packet drops, etc. These tests are run under Docker and create multiple containers with various software.
|
||||||
|
|
||||||
See `tests/integration/README.md` on how to run these tests.
|
See `tests/integration/README.md` on how to run these tests.
|
||||||
|
|
||||||
Note that integration of ClickHouse with third-party drivers is not tested. Also we currently don’t have integration tests with our JDBC and ODBC drivers.
|
Note that integration of ClickHouse with third-party drivers is not tested. Also, we currently do not have integration tests with our JDBC and ODBC drivers.
|
||||||
|
|
||||||
## Unit Tests {#unit-tests}
|
## Unit Tests {#unit-tests}
|
||||||
|
|
||||||
@ -123,7 +123,7 @@ Example with gdb:
|
|||||||
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
|
$ sudo -u clickhouse gdb --args /usr/bin/clickhouse server --config-file /etc/clickhouse-server/config.xml
|
||||||
```
|
```
|
||||||
|
|
||||||
If the system clickhouse-server is already running and you don’t want to stop it, you can change port numbers in your `config.xml` (or override them in a file in `config.d` directory), provide appropriate data path, and run it.
|
If the system clickhouse-server is already running and you do not want to stop it, you can change port numbers in your `config.xml` (or override them in a file in `config.d` directory), provide appropriate data path, and run it.
|
||||||
|
|
||||||
The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. For a quick and dirty test of your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.
|
The `clickhouse` binary has almost no dependencies and works across a wide range of Linux distributions. For a quick and dirty test of your changes on a server, you can simply `scp` your freshly built `clickhouse` binary to your server and then run it as in the examples above.
|
||||||
|
|
||||||
@ -161,7 +161,7 @@ $ clickhouse benchmark --concurrency 16 < queries.tsv
|
|||||||
|
|
||||||
Then leave it for a night or weekend and go take a rest.
|
Then leave it for a night or weekend and go take a rest.
|
||||||
|
|
||||||
You should check that `clickhouse-server` doesn’t crash, memory footprint is bounded and performance not degrading over time.
|
You should check that `clickhouse-server` does not crash, memory footprint is bounded and performance not degrading over time.
|
||||||
|
|
||||||
Precise query execution timings are not recorded and not compared due to high variability of queries and environment.
|
Precise query execution timings are not recorded and not compared due to high variability of queries and environment.
|
||||||
|
|
||||||
@ -230,7 +230,7 @@ Fuzzers are not built by default. To build fuzzers both `-DENABLE_FUZZING=1` and
|
|||||||
We recommend disabling Jemalloc while building fuzzers. The configuration used to integrate ClickHouse fuzzing into
|
We recommend disabling Jemalloc while building fuzzers. The configuration used to integrate ClickHouse fuzzing into
|
||||||
Google OSS-Fuzz can be found at `docker/fuzz`.
|
Google OSS-Fuzz can be found at `docker/fuzz`.
|
||||||
|
|
||||||
We also use simple fuzz test to generate random SQL queries and to check that the server doesn’t die executing them.
|
We also use simple fuzz test to generate random SQL queries and to check that the server does not die executing them.
|
||||||
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).
|
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).
|
||||||
|
|
||||||
We also use a sophisticated AST-based query fuzzer that is able to find a huge amount of corner cases. It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.tech/blog/en/2021/fuzzing-clickhouse/).
|
We also use a sophisticated AST-based query fuzzer that is able to find a huge amount of corner cases. It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.tech/blog/en/2021/fuzzing-clickhouse/).
|
||||||
@ -332,7 +332,7 @@ We run tests with Yandex internal CI and job automation system named “Sandbox
|
|||||||
|
|
||||||
Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded by direct links. Artifacts are stored for several months. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you.
|
Build jobs and tests are run in Sandbox on a per-commit basis. The resulting packages and test results are published on GitHub and can be downloaded by direct links. Artifacts are stored for several months. When you send a pull request on GitHub, we tag it as “can be tested” and our CI system will build ClickHouse packages (release, debug, with address sanitizer, etc) for you.
|
||||||
|
|
||||||
We don’t use Travis CI due to the limit on time and computational power.
|
We do not use Travis CI due to the limit on time and computational power.
|
||||||
We don’t use Jenkins. It was used before and now we are happy we are not using Jenkins.
|
We do not use Jenkins. It was used before and now we are happy we are not using Jenkins.
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
|
||||||
|
@ -47,7 +47,7 @@ EXCHANGE TABLES new_table AND old_table;
|
|||||||
|
|
||||||
### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
|
### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
|
||||||
|
|
||||||
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables is recomended do not specify parameters of engine - path in ZooKeeper and replica name. In this case will be used parameters of the configuration [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). If you want specify parameters of engine explicitly than recomended to use {uuid} macros. This is useful so that unique paths are automatically generated for each table in the ZooKeeper.
|
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify the engine parameters (the path in ZooKeeper and the replica name). In this case, the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) are used. If you want to specify the engine parameters explicitly, it is recommended to use the {uuid} macro, so that unique paths are automatically generated for each table in ZooKeeper.
|
||||||
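A minimal sketch of both options, assuming a database that uses the Atomic engine (the database, table, and column names are placeholders):

``` sql
-- Rely on default_replica_path and default_replica_name from the server configuration:
CREATE TABLE test.events (event_date Date, id UInt64)
ENGINE = ReplicatedMergeTree
ORDER BY id;

-- Or spell the parameters out, using the {uuid} macro so that every table
-- gets its own unique path in ZooKeeper:
CREATE TABLE test.events_explicit (event_date Date, id UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY id;
```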
|
|
||||||
## See Also
|
## See Also
|
||||||
|
|
||||||
|
@ -82,8 +82,8 @@ Virtual column is an integral table engine attribute that is defined in the engi
|
|||||||
|
|
||||||
You shouldn’t specify virtual columns in the `CREATE TABLE` query and you can’t see them in `SHOW CREATE TABLE` and `DESCRIBE TABLE` query results. Virtual columns are also read-only, so you can’t insert data into virtual columns.
|
You shouldn’t specify virtual columns in the `CREATE TABLE` query and you can’t see them in `SHOW CREATE TABLE` and `DESCRIBE TABLE` query results. Virtual columns are also read-only, so you can’t insert data into virtual columns.
|
||||||
|
|
||||||
To select data from a virtual column, you must specify its name in the `SELECT` query. `SELECT *` doesn’t return values from virtual columns.
|
To select data from a virtual column, you must specify its name in the `SELECT` query. `SELECT *` does not return values from virtual columns.
|
||||||
|
|
||||||
If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible. We don’t recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.
|
If you create a table with a column that has the same name as one of the table virtual columns, the virtual column becomes inaccessible. We do not recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.
|
||||||
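For example, with a `MergeTree` table the `_part` virtual column holds the name of the data part each row was read from; it has to be named explicitly (the `visits` table here is a placeholder):

``` sql
-- SELECT * would not include _part; ask for it by name.
SELECT
    _part,
    count()
FROM visits
GROUP BY _part;
```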
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/) <!--hide-->
|
||||||
|
@ -40,7 +40,7 @@ Required parameters:
|
|||||||
|
|
||||||
- `kafka_broker_list` — A comma-separated list of brokers (for example, `localhost:9092`).
|
- `kafka_broker_list` — A comma-separated list of brokers (for example, `localhost:9092`).
|
||||||
- `kafka_topic_list` — A list of Kafka topics.
|
- `kafka_topic_list` — A list of Kafka topics.
|
||||||
- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you don’t want messages to be duplicated in the cluster, use the same group name everywhere.
|
- `kafka_group_name` — A group of Kafka consumers. Reading margins are tracked for each group separately. If you do not want messages to be duplicated in the cluster, use the same group name everywhere.
|
||||||
- `kafka_format` — Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the [Formats](../../../interfaces/formats.md) section. A minimal engine declaration using these parameters is sketched right after this list.
|
- `kafka_format` — Message format. Uses the same notation as the SQL `FORMAT` function, such as `JSONEachRow`. For more information, see the [Formats](../../../interfaces/formats.md) section. A minimal engine declaration using these parameters is sketched right after this list.
|
||||||
|
|
||||||
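A minimal sketch of a Kafka-backed table using only the required parameters (the broker address, topic, consumer group, and column names are placeholders):

``` sql
CREATE TABLE queue
(
    timestamp UInt64,
    level String,
    message String
)
ENGINE = Kafka('localhost:9092', 'topic', 'group1', 'JSONEachRow');
```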
Optional parameters:
|
Optional parameters:
|
||||||
|
@ -5,7 +5,7 @@ toc_title: MySQL
|
|||||||
|
|
||||||
# MySQL {#mysql}
|
# MySQL {#mysql}
|
||||||
|
|
||||||
The MySQL engine allows you to perform `SELECT` queries on data that is stored on a remote MySQL server.
|
The MySQL engine allows you to perform `SELECT` and `INSERT` queries on data that is stored on a remote MySQL server.
|
||||||
|
|
||||||
## Creating a Table {#creating-a-table}
|
## Creating a Table {#creating-a-table}
|
||||||
|
|
||||||
|
@ -130,6 +130,7 @@ The following settings can be set before query execution or placed into configur
|
|||||||
- `s3_max_single_part_upload_size` — The maximum size of object to upload using singlepart upload to S3. Default value is `64Mb`.
|
- `s3_max_single_part_upload_size` — The maximum size of object to upload using singlepart upload to S3. Default value is `64Mb`.
|
||||||
- `s3_min_upload_part_size` — The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`.
|
- `s3_min_upload_part_size` — The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`.
|
||||||
- `s3_max_redirects` — The maximum number of S3 redirect hops allowed. Default value is `10`.
|
- `s3_max_redirects` — The maximum number of S3 redirect hops allowed. Default value is `10`.
|
||||||
|
- `s3_single_read_retries` — The maximum number of attempts during single read. Default value is `4`.
|
||||||
|
|
||||||
Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration.
|
Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration.
|
||||||
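As an illustrative sketch of applying this advice (the file URL matches the usage example below; the column list is an assumption):

``` sql
-- Disallow S3 redirects for the current session, as recommended when URLs are user-supplied.
SET s3_max_redirects = 0;

-- The same setting can also be scoped to a single query.
SELECT count()
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv', 'CSV', 'name String, value UInt32')
SETTINGS s3_max_redirects = 0;
```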
|
|
||||||
@ -144,6 +145,7 @@ The following settings can be specified in configuration file for given endpoint
|
|||||||
- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
|
- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
|
||||||
- `header` — Adds the specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
|
- `header` — Adds the specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
|
||||||
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
|
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
|
||||||
|
- `max_single_read_retries` — The maximum number of attempts during single read. Default value is `4`. Optional.
|
||||||
|
|
||||||
**Example:**
|
**Example:**
|
||||||
|
|
||||||
@ -158,13 +160,14 @@ The following settings can be specified in configuration file for given endpoint
|
|||||||
<!-- <use_insecure_imds_request>false</use_insecure_imds_request> -->
|
<!-- <use_insecure_imds_request>false</use_insecure_imds_request> -->
|
||||||
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
|
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
|
||||||
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
|
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
|
||||||
|
<!-- <max_single_read_retries>4</max_single_read_retries> -->
|
||||||
</endpoint-name>
|
</endpoint-name>
|
||||||
</s3>
|
</s3>
|
||||||
```
|
```
|
||||||
|
|
||||||
## Usage {#usage-examples}
|
## Usage {#usage-examples}
|
||||||
|
|
||||||
Suppose we have several files in TSV format with the following URIs on HDFS:
|
Suppose we have several files in CSV format with the following URIs on S3:
|
||||||
|
|
||||||
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
|
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv'
|
||||||
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
|
- 'https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv'
|
||||||
|
@ -38,7 +38,7 @@ Engines:
|
|||||||
|
|
||||||
## Differences {#differences}
|
## Differences {#differences}
|
||||||
|
|
||||||
The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine doesn’t support parallel data reading by several threads in a single query. It reads data slower than other engines in the family that support parallel reading from a single query and it uses almost as many file descriptors as the `Log` engine because it stores each column in a separate file. Use it only in simple scenarios.
|
The `TinyLog` engine is the simplest in the family and provides the poorest functionality and lowest efficiency. The `TinyLog` engine does not support parallel data reading by several threads in a single query. It reads data slower than other engines in the family that support parallel reading from a single query and it uses almost as many file descriptors as the `Log` engine because it stores each column in a separate file. Use it only in simple scenarios.
|
||||||
|
|
||||||
The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer file descriptors, but the `Log` engine provides higher efficiency when reading data.
|
The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer file descriptors, but the `Log` engine provides higher efficiency when reading data.
|
||||||
|
|
||||||
|
@ -126,7 +126,7 @@ Also when there are at least 2 more “state” rows than “cancel” rows, or
|
|||||||
Thus, collapsing should not change the results of calculating statistics.
|
Thus, collapsing should not change the results of calculating statistics.
|
||||||
Changes are gradually collapsed so that in the end only the last state of almost every object is left.
|
Changes are gradually collapsed so that in the end only the last state of almost every object is left.
|
||||||
|
|
||||||
The `Sign` is required because the merging algorithm doesn’t guarantee that all of the rows with the same sorting key will be in the same resulting data part and even on the same physical server. ClickHouse process `SELECT` queries with multiple threads, and it can not predict the order of rows in the result. The aggregation is required if there is a need to get completely “collapsed” data from `CollapsingMergeTree` table.
|
The `Sign` is required because the merging algorithm does not guarantee that all of the rows with the same sorting key will be in the same resulting data part or even on the same physical server. ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. Aggregation is required if there is a need to get completely “collapsed” data from a `CollapsingMergeTree` table.
|
||||||
|
|
||||||
To finalize collapsing, write a query with `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and so on, and also add `HAVING sum(Sign) > 0`.
|
To finalize collapsing, write a query with `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and so on, and also add `HAVING sum(Sign) > 0`.
|
||||||
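A hedged sketch of such a finalizing query, using the column names of the standard `UAct` example that also appears further down in these docs:

``` sql
SELECT
    UserID,
    sum(PageViews * Sign) AS PageViews,
    sum(Duration * Sign) AS Duration,
    sum(Sign) AS Visits          -- quantity: sum(Sign) instead of count()
FROM UAct
GROUP BY UserID
HAVING sum(Sign) > 0;
```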
|
|
||||||
|
@ -33,6 +33,8 @@ ORDER BY (CounterID, StartDate, intHash32(UserID));
|
|||||||
|
|
||||||
In this example, we set partitioning by the event types that occurred during the current week.
|
In this example, we set partitioning by the event types that occurred during the current week.
|
||||||
|
|
||||||
|
By default, the floating-point partition key is not supported. To use it, enable the setting [allow_floating_point_partition_key](../../../operations/settings/merge-tree-settings.md#allow_floating_point_partition_key).
|
||||||
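A minimal sketch of opting in at table level, assuming the setting can be supplied in the `SETTINGS` clause of `CREATE TABLE`; the table and columns are made up, and partitioning this finely is only for illustration:

``` sql
CREATE TABLE measurements
(
    reading_time DateTime,
    temperature Float32,
    value UInt64
)
ENGINE = MergeTree
PARTITION BY temperature          -- a floating-point partition key
ORDER BY reading_time
SETTINGS allow_floating_point_partition_key = 1;
```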
|
|
||||||
When inserting new data to a table, this data is stored as a separate part (chunk) sorted by the primary key. In 10-15 minutes after inserting, the parts of the same partition are merged into the entire part.
|
When inserting new data to a table, this data is stored as a separate part (chunk) sorted by the primary key. In 10-15 minutes after inserting, the parts of the same partition are merged into the entire part.
|
||||||
|
|
||||||
!!! info "Info"
|
!!! info "Info"
|
||||||
|
@ -7,7 +7,7 @@ toc_title: GraphiteMergeTree
|
|||||||
|
|
||||||
This engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite.
|
This engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite.
|
||||||
|
|
||||||
You can use any ClickHouse table engine to store the Graphite data if you don’t need rollup, but if you need a rollup use `GraphiteMergeTree`. The engine reduces the volume of storage and increases the efficiency of queries from Graphite.
|
You can use any ClickHouse table engine to store the Graphite data if you do not need rollup, but if you need a rollup use `GraphiteMergeTree`. The engine reduces the volume of storage and increases the efficiency of queries from Graphite.
|
||||||
|
|
||||||
The engine inherits properties from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md).
|
The engine inherits properties from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md).
|
||||||
|
|
||||||
|
@ -64,7 +64,7 @@ For a description of parameters, see the [CREATE query description](../../../sql
|
|||||||
|
|
||||||
ClickHouse uses the sorting key as a primary key if the primary key is not defined obviously by the `PRIMARY KEY` clause.
|
ClickHouse uses the sorting key as a primary key if the primary key is not defined obviously by the `PRIMARY KEY` clause.
|
||||||
|
|
||||||
Use the `ORDER BY tuple()` syntax, if you don’t need sorting. See [Selecting the Primary Key](#selecting-the-primary-key).
|
Use the `ORDER BY tuple()` syntax, if you do not need sorting. See [Selecting the Primary Key](#selecting-the-primary-key).
|
||||||
|
|
||||||
- `PARTITION BY` — The [partitioning key](../../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Optional.
|
- `PARTITION BY` — The [partitioning key](../../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Optional.
|
||||||
|
|
||||||
@ -162,7 +162,7 @@ Data parts can be stored in `Wide` or `Compact` format. In `Wide` format each co
|
|||||||
|
|
||||||
The data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the table engine. If the number of bytes or rows in a data part is less than the corresponding setting's value, the part is stored in `Compact` format. Otherwise it is stored in `Wide` format. If neither of these settings is set, data parts are stored in `Wide` format.
|
The data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the table engine. If the number of bytes or rows in a data part is less than the corresponding setting's value, the part is stored in `Compact` format. Otherwise it is stored in `Wide` format. If neither of these settings is set, data parts are stored in `Wide` format.
|
||||||
|
|
||||||
Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse doesn’t split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for the row. For each data part, ClickHouse creates an index file that stores the marks. For each column, whether it’s in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in column files.
|
Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse does not split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for the row. For each data part, ClickHouse creates an index file that stores the marks. For each column, whether it’s in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in column files.
|
||||||
|
|
||||||
The granule size is restricted by the `index_granularity` and `index_granularity_bytes` settings of the table engine. The number of rows in a granule lays in the `[1, index_granularity]` range, depending on the size of the rows. The size of a granule can exceed `index_granularity_bytes` if the size of a single row is greater than the value of the setting. In this case, the size of the granule equals the size of the row.
|
The granule size is restricted by the `index_granularity` and `index_granularity_bytes` settings of the table engine. The number of rows in a granule lays in the `[1, index_granularity]` range, depending on the size of the rows. The size of a granule can exceed `index_granularity_bytes` if the size of a single row is greater than the value of the setting. In this case, the size of the granule equals the size of the row.
|
||||||
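As a sketch, both the part format threshold and the granule size are ordinary table-level settings; the values here are illustrative, not necessarily the defaults:

``` sql
CREATE TABLE example
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS
    min_bytes_for_wide_part = 10485760,   -- smaller parts are written in Compact format
    index_granularity = 8192,             -- at most 8192 rows per granule
    index_granularity_bytes = 10485760;   -- also cap the granule size in bytes
```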
|
|
||||||
@ -227,7 +227,7 @@ This feature is helpful when using the [SummingMergeTree](../../../engines/table
|
|||||||
|
|
||||||
In this case it makes sense to leave only a few columns in the primary key that will provide efficient range scans and add the remaining dimension columns to the sorting key tuple.
|
In this case it makes sense to leave only a few columns in the primary key that will provide efficient range scans and add the remaining dimension columns to the sorting key tuple.
|
||||||
|
|
||||||
[ALTER](../../../sql-reference/statements/alter/index.md) of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts don’t need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification.
|
[ALTER](../../../sql-reference/statements/alter/index.md) of the sorting key is a lightweight operation because when a new column is simultaneously added to the table and to the sorting key, existing data parts do not need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data is sorted by both the old and new sorting keys at the moment of table modification.
|
||||||
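A hedged sketch of this pattern, with made-up dimension and metric names: the primary key stays short while the sorting key carries the additional dimensions, and a newly added dimension can be appended to the sorting key in the same lightweight `ALTER`:

``` sql
CREATE TABLE agg_events
(
    CounterID UInt32,
    EventDate Date,
    Region String,
    Hits UInt64
)
ENGINE = SummingMergeTree
PARTITION BY toYYYYMM(EventDate)
PRIMARY KEY (CounterID, EventDate)          -- short key for efficient range scans
ORDER BY (CounterID, EventDate, Region);    -- sorting key with the extra dimension

-- Adding a new dimension and extending the sorting key does not rewrite existing parts.
ALTER TABLE agg_events
    ADD COLUMN Browser String,
    MODIFY ORDER BY (CounterID, EventDate, Region, Browser);
```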
|
|
||||||
### Use of Indexes and Partitions in Queries {#use-of-indexes-and-partitions-in-queries}
|
### Use of Indexes and Partitions in Queries {#use-of-indexes-and-partitions-in-queries}
|
||||||
|
|
||||||
@ -265,7 +265,7 @@ The key for partitioning by month allows reading only those data blocks which co
|
|||||||
|
|
||||||
Consider, for example, the days of the month. They form a [monotonic sequence](https://en.wikipedia.org/wiki/Monotonic_function) for one month, but not monotonic for more extended periods. This is a partially-monotonic sequence. If a user creates the table with partially-monotonic primary key, ClickHouse creates a sparse index as usual. When a user selects data from this kind of table, ClickHouse analyzes the query conditions. If the user wants to get data between two marks of the index and both these marks fall within one month, ClickHouse can use the index in this particular case because it can calculate the distance between the parameters of a query and index marks.
|
Consider, for example, the days of the month. They form a [monotonic sequence](https://en.wikipedia.org/wiki/Monotonic_function) for one month, but not monotonic for more extended periods. This is a partially-monotonic sequence. If a user creates the table with partially-monotonic primary key, ClickHouse creates a sparse index as usual. When a user selects data from this kind of table, ClickHouse analyzes the query conditions. If the user wants to get data between two marks of the index and both these marks fall within one month, ClickHouse can use the index in this particular case because it can calculate the distance between the parameters of a query and index marks.
|
||||||
|
|
||||||
ClickHouse cannot use an index if the values of the primary key in the query parameter range don’t represent a monotonic sequence. In this case, ClickHouse uses the full scan method.
|
ClickHouse cannot use an index if the values of the primary key in the query parameter range do not represent a monotonic sequence. In this case, ClickHouse uses the full scan method.
|
||||||
|
|
||||||
ClickHouse uses this logic not only for days of the month sequences, but for any primary key that represents a partially-monotonic sequence.
|
ClickHouse uses this logic not only for days of the month sequences, but for any primary key that represents a partially-monotonic sequence.
|
||||||
|
|
||||||
@ -748,6 +748,7 @@ Configuration markup:
|
|||||||
<connect_timeout_ms>10000</connect_timeout_ms>
|
<connect_timeout_ms>10000</connect_timeout_ms>
|
||||||
<request_timeout_ms>5000</request_timeout_ms>
|
<request_timeout_ms>5000</request_timeout_ms>
|
||||||
<retry_attempts>10</retry_attempts>
|
<retry_attempts>10</retry_attempts>
|
||||||
|
<single_read_retries>4</single_read_retries>
|
||||||
<min_bytes_for_seek>1000</min_bytes_for_seek>
|
<min_bytes_for_seek>1000</min_bytes_for_seek>
|
||||||
<metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
|
<metadata_path>/var/lib/clickhouse/disks/s3/</metadata_path>
|
||||||
<cache_enabled>true</cache_enabled>
|
<cache_enabled>true</cache_enabled>
|
||||||
@ -772,6 +773,7 @@ Optional parameters:
|
|||||||
- `connect_timeout_ms` — Socket connect timeout in milliseconds. Default value is `10 seconds`.
|
- `connect_timeout_ms` — Socket connect timeout in milliseconds. Default value is `10 seconds`.
|
||||||
- `request_timeout_ms` — Request timeout in milliseconds. Default value is `5 seconds`.
|
- `request_timeout_ms` — Request timeout in milliseconds. Default value is `5 seconds`.
|
||||||
- `retry_attempts` — Number of retry attempts in case of failed request. Default value is `10`.
|
- `retry_attempts` — Number of retry attempts in case of failed request. Default value is `10`.
|
||||||
|
- `single_read_retries` — Number of retry attempts in case of connection drop during read. Default value is `4`.
|
||||||
- `min_bytes_for_seek` — Minimal number of bytes to use seek operation instead of sequential read. Default value is `1 Mb`.
|
- `min_bytes_for_seek` — Minimal number of bytes to use seek operation instead of sequential read. Default value is `1 Mb`.
|
||||||
- `metadata_path` — Path on local FS to store metadata files for S3. Default value is `/var/lib/clickhouse/disks/<disk_name>/`.
|
- `metadata_path` — Path on local FS to store metadata files for S3. Default value is `/var/lib/clickhouse/disks/<disk_name>/`.
|
||||||
- `cache_enabled` — Allows to cache mark and index files on local FS. Default value is `true`.
|
- `cache_enabled` — Allows to cache mark and index files on local FS. Default value is `true`.
|
||||||
|
@ -7,9 +7,9 @@ toc_title: ReplacingMergeTree
|
|||||||
|
|
||||||
The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`).
|
The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`).
|
||||||
|
|
||||||
Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, don’t count on using it, because the `OPTIMIZE` query will read and write a large amount of data.
|
Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, do not count on using it, because the `OPTIMIZE` query will read and write a large amount of data.
|
||||||
|
|
||||||
Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it doesn’t guarantee the absence of duplicates.
|
Thus, `ReplacingMergeTree` is suitable for clearing out duplicate data in the background in order to save space, but it does not guarantee the absence of duplicates.
|
||||||
|
|
||||||
## Creating a Table {#creating-a-table}
|
## Creating a Table {#creating-a-table}
|
||||||
|
|
||||||
@ -34,7 +34,7 @@ For a description of request parameters, see [statement description](../../../sq
|
|||||||
|
|
||||||
**ReplacingMergeTree Parameters**
|
**ReplacingMergeTree Parameters**
|
||||||
|
|
||||||
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.
|
- `ver` — column with the version number. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
|
||||||
|
|
||||||
When merging, `ReplacingMergeTree` leaves only one of all the rows with the same sorting key:
|
When merging, `ReplacingMergeTree` leaves only one of all the rows with the same sorting key:
|
||||||
|
|
||||||
@ -66,5 +66,3 @@ All of the parameters excepting `ver` have the same meaning as in `MergeTree`.
|
|||||||
- `ver` - column with the version. Optional parameter. For a description, see the text above.
|
- `ver` - column with the version. Optional parameter. For a description, see the text above.
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replacingmergetree/) <!--hide-->
|
|
||||||
|
@ -95,17 +95,19 @@ If ZooKeeper isn’t set in the config file, you can’t create replicated table
|
|||||||
|
|
||||||
ZooKeeper is not used in `SELECT` queries because replication does not affect the performance of `SELECT` and queries run just as fast as they do for non-replicated tables. When querying distributed replicated tables, ClickHouse behavior is controlled by the settings [max_replica_delay_for_distributed_queries](../../../operations/settings/settings.md#settings-max_replica_delay_for_distributed_queries) and [fallback_to_stale_replicas_for_distributed_queries](../../../operations/settings/settings.md#settings-fallback_to_stale_replicas_for_distributed_queries).
|
ZooKeeper is not used in `SELECT` queries because replication does not affect the performance of `SELECT` and queries run just as fast as they do for non-replicated tables. When querying distributed replicated tables, ClickHouse behavior is controlled by the settings [max_replica_delay_for_distributed_queries](../../../operations/settings/settings.md#settings-max_replica_delay_for_distributed_queries) and [fallback_to_stale_replicas_for_distributed_queries](../../../operations/settings/settings.md#settings-fallback_to_stale_replicas_for_distributed_queries).
|
||||||
|
|
||||||
For each `INSERT` query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block or one block per `max_insert_block_size = 1048576` rows.) This leads to slightly longer latencies for `INSERT` compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one `INSERT` per second, it doesn’t create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred `INSERTs` per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data.
|
For each `INSERT` query, approximately ten entries are added to ZooKeeper through several transactions. (To be more precise, this is for each inserted block of data; an INSERT query contains one block or one block per `max_insert_block_size = 1048576` rows.) This leads to slightly longer latencies for `INSERT` compared to non-replicated tables. But if you follow the recommendations to insert data in batches of no more than one `INSERT` per second, it does not create any problems. The entire ClickHouse cluster used for coordinating one ZooKeeper cluster has a total of several hundred `INSERTs` per second. The throughput on data inserts (the number of rows per second) is just as high as for non-replicated data.
|
||||||
|
|
||||||
For very large clusters, you can use different ZooKeeper clusters for different shards. However, this hasn’t proven necessary on the Yandex.Metrica cluster (approximately 300 servers).
|
For very large clusters, you can use different ZooKeeper clusters for different shards. However, this hasn’t proven necessary on the Yandex.Metrica cluster (approximately 300 servers).
|
||||||
|
|
||||||
Replication is asynchronous and multi-master. `INSERT` queries (as well as `ALTER`) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas are not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network. The number of threads performing background tasks for replicated tables can be set by [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) setting.
|
Replication is asynchronous and multi-master. `INSERT` queries (as well as `ALTER`) can be sent to any available server. Data is inserted on the server where the query is run, and then it is copied to the other servers. Because it is asynchronous, recently inserted data appears on the other replicas with some latency. If part of the replicas are not available, the data is written when they become available. If a replica is available, the latency is the amount of time it takes to transfer the block of compressed data over the network. The number of threads performing background tasks for replicated tables can be set by [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size) setting.
|
||||||
|
|
||||||
|
`ReplicatedMergeTree` engine uses a separate thread pool for replicated fetches. Size of the pool is limited by the [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size) setting which can be tuned with a server restart.
|
||||||
|
|
||||||
By default, an INSERT query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the `insert_quorum` option.
|
By default, an INSERT query waits for confirmation of writing the data from only one replica. If the data was successfully written to only one replica and the server with this replica ceases to exist, the stored data will be lost. To enable getting confirmation of data writes from multiple replicas, use the `insert_quorum` option.
|
||||||
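For instance, a sketch of turning quorum writes on for a session (the value, table, and row are placeholders):

``` sql
-- Wait for confirmation from at least two replicas before reporting success.
SET insert_quorum = 2;

INSERT INTO replicated_table VALUES ('2021-05-01', 1);
```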
|
|
||||||
Each block of data is written atomically. The INSERT query is divided into blocks up to `max_insert_block_size = 1048576` rows. In other words, if the `INSERT` query has less than 1048576 rows, it is made atomically.
|
Each block of data is written atomically. The INSERT query is divided into blocks up to `max_insert_block_size = 1048576` rows. In other words, if the `INSERT` query has less than 1048576 rows, it is made atomically.
|
||||||
|
|
||||||
Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application doesn’t know if the data was written to the DB, so the `INSERT` query can simply be repeated. It doesn’t matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-merge_tree) server settings.
|
Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once. The reason for this is in case of network failures when the client application does not know if the data was written to the DB, so the `INSERT` query can simply be repeated. It does not matter which replica INSERTs were sent to with identical data. `INSERTs` are idempotent. Deduplication parameters are controlled by [merge_tree](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-merge_tree) server settings.
|
||||||
|
|
||||||
During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.)
|
During replication, only the source data to insert is transferred over the network. Further data transformation (merging) is coordinated and performed on all the replicas in the same way. This minimizes network usage, which means that replication works well when replicas reside in different datacenters. (Note that duplicating data in different datacenters is the main goal of replication.)
|
||||||
|
|
||||||
@ -172,7 +174,7 @@ In this case, the path consists of the following parts:
|
|||||||
|
|
||||||
`{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the Yandex.Metrica cluster uses bi-level sharding. For most tasks, you can leave just the {shard} substitution, which will be expanded to the shard identifier.
|
`{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the Yandex.Metrica cluster uses bi-level sharding. For most tasks, you can leave just the {shard} substitution, which will be expanded to the shard identifier.
|
||||||
|
|
||||||
`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it doesn’t change after a RENAME query.
|
`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query.
|
||||||
*HINT*: you could add a database name in front of `table_name` as well. E.g. `db_name.table_name`
|
*HINT*: you could add a database name in front of `table_name` as well. E.g. `db_name.table_name`
|
||||||
|
|
||||||
The two built-in substitutions `{database}` and `{table}` can be used, they expand into the table name and the database name respectively (unless these macros are defined in the `macros` section). So the zookeeper path can be specified as `'/clickhouse/tables/{layer}-{shard}/{database}/{table}'`.
|
The two built-in substitutions `{database}` and `{table}` can be used, they expand into the table name and the database name respectively (unless these macros are defined in the `macros` section). So the zookeeper path can be specified as `'/clickhouse/tables/{layer}-{shard}/{database}/{table}'`.
|
||||||
@ -284,6 +286,7 @@ If the data in ZooKeeper was lost or damaged, you can save data by moving it to
|
|||||||
**See Also**
|
**See Also**
|
||||||
|
|
||||||
- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
|
- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
|
||||||
|
- [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size)
|
||||||
- [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold)
|
- [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold)
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/) <!--hide-->
|
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/) <!--hide-->
|
||||||
|
@ -7,7 +7,7 @@ toc_title: SummingMergeTree
|
|||||||
|
|
||||||
The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree). The difference is that when merging data parts for `SummingMergeTree` tables, ClickHouse replaces all the rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with one row that contains summarized values for the columns with a numeric data type. If the sorting key is composed in a way that a single key value corresponds to a large number of rows, this significantly reduces storage volume and speeds up data selection.
|
The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree). The difference is that when merging data parts for `SummingMergeTree` tables, ClickHouse replaces all the rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with one row that contains summarized values for the columns with a numeric data type. If the sorting key is composed in a way that a single key value corresponds to a large number of rows, this significantly reduces storage volume and speeds up data selection.
|
||||||
|
|
||||||
We recommend to use the engine together with `MergeTree`. Store complete data in `MergeTree` table, and use `SummingMergeTree` for aggregated data storing, for example, when preparing reports. Such an approach will prevent you from losing valuable data due to an incorrectly composed primary key.
|
We recommend using the engine together with `MergeTree`. Store complete data in `MergeTree` table, and use `SummingMergeTree` for aggregated data storing, for example, when preparing reports. Such an approach will prevent you from losing valuable data due to an incorrectly composed primary key.
|
||||||
|
|
||||||
## Creating a Table {#creating-a-table}
|
## Creating a Table {#creating-a-table}
|
||||||
|
|
||||||
|
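A minimal sketch of such a table (the table and column names below are illustrative; the full syntax follows in this section):

``` sql
CREATE TABLE summing_example
(
    key UInt32,
    value UInt64
)
ENGINE = SummingMergeTree()
ORDER BY key
```

Rows inserted with the same `key` are eventually collapsed into a single row whose `value` holds the sum; until merges complete, use `sum(value)` with `GROUP BY key` to get exact totals.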
@ -133,7 +133,7 @@ When ClickHouse inserts data, it orders rows by the primary key. If the `Version
## Selecting Data {#selecting-data}

ClickHouse does not guarantee that all of the rows with the same primary key will be in the same resulting data part or even on the same physical server. This is true both for writing the data and for the subsequent merging of the data parts. In addition, ClickHouse processes `SELECT` queries with multiple threads, and it cannot predict the order of rows in the result. This means that aggregation is required if there is a need to get completely “collapsed” data from a `VersionedCollapsingMergeTree` table.

To finalize collapsing, write a query with a `GROUP BY` clause and aggregate functions that account for the sign. For example, to calculate quantity, use `sum(Sign)` instead of `count()`. To calculate the sum of something, use `sum(Sign * x)` instead of `sum(x)`, and add `HAVING sum(Sign) > 0`.
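For instance, with the `UAct` example table used later on this page (assuming its `UserID`, `PageViews`, `Duration`, `Sign`, and `Version` columns), such a query could look like:

``` sql
SELECT
    UserID,
    sum(PageViews * Sign) AS PageViews,
    sum(Duration * Sign) AS Duration,
    Version
FROM UAct
GROUP BY UserID, Version
HAVING sum(Sign) > 0
```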
@ -219,7 +219,7 @@ HAVING sum(Sign) > 0
```
└─────────────────────┴───────────┴──────────┴─────────┘
```

If we do not need aggregation and want to force collapsing, we can use the `FINAL` modifier for the `FROM` clause.

``` sql
SELECT * FROM UAct FINAL
```
@ -20,11 +20,11 @@ Engine parameters:
Optional engine parameters:

- `flush_time`, `flush_rows`, `flush_bytes` – Conditions for flushing data from the buffer in the background (omitted or zero means no `flush*` parameters).

Data is flushed from the buffer and written to the destination table if all the `min*` conditions or at least one `max*` condition is met.

Also, if at least one `flush*` condition is met, a flush is initiated in the background. This differs from `max*`, since `flush*` allows you to configure background flushes separately, to avoid adding latency to `INSERT` (into `Buffer`) queries.

- `min_time`, `max_time`, `flush_time` – Condition for the time in seconds from the moment of the first write to the buffer.
- `min_rows`, `max_rows`, `flush_rows` – Condition for the number of rows in the buffer.
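To see how these parameters line up in the engine definition, here is a hedged sketch (the `merge.hits` destination table and the specific thresholds are illustrative assumptions, not values from this page):

``` sql
-- Buffer(database, table, num_layers,
--        min_time, max_time, min_rows, max_rows, min_bytes, max_bytes
--        [, flush_time, flush_rows, flush_bytes])
CREATE TABLE merge.hits_buffer AS merge.hits
ENGINE = Buffer(merge, hits, 16, 10, 100, 10000, 1000000, 10000000, 100000000)
```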
@ -49,12 +49,12 @@ You can set empty strings in single quotation marks for the database and table n
When reading from a Buffer table, data is processed both from the buffer and from the destination table (if there is one).
Note that Buffer tables do not support an index. In other words, data in the buffer is fully scanned, which might be slow for large buffers. (For data in a subordinate table, the index that it supports will be used.)

If the set of columns in the Buffer table does not match the set of columns in a subordinate table, a subset of columns that exist in both tables is inserted.

If the types do not match for one of the columns in the Buffer table and a subordinate table, an error message is entered in the server log, and the buffer is cleared.
The same thing happens if the subordinate table does not exist when the buffer is flushed.

If you need to run ALTER for a subordinate table and the Buffer table, we recommend first deleting the Buffer table, running ALTER for the subordinate table, and then creating the Buffer table again.

If the server is restarted abnormally, the data in the buffer is lost.
@ -70,6 +70,6 @@ Due to these disadvantages, we can only recommend using a Buffer table in rare c
A Buffer table is used when too many INSERTs are received from a large number of servers over a unit of time, and the data cannot be buffered before insertion, which means the INSERTs cannot run fast enough.

Note that it does not make sense to insert data one row at a time, even for Buffer tables. This will only produce a speed of a few thousand rows per second, while inserting larger blocks of data can produce over a million rows per second (see the section “Performance”).

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/buffer/) <!--hide-->
@ -94,4 +94,6 @@ select * from products limit 1;
```
└───────────────┴─────────────────┘
```

**See Also**

- [Dictionary function](../../../sql-reference/table-functions/dictionary.md#dictionary-function)
@ -25,7 +25,7 @@ The Distributed engine accepts parameters:
- [insert_distributed_sync](../../../operations/settings/settings.md#insert_distributed_sync) setting
- [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) for the examples

Also, it accepts the following settings:

- `fsync_after_insert` – do the `fsync` for the file data after an asynchronous insert to Distributed. Guarantees that the OS flushed the whole inserted data to a file **on the initiator node** disk.
@ -124,7 +124,7 @@ Replicas are duplicating servers (in order to read all the data, you can access
Cluster names must not contain dots.

The parameters `host`, `port`, and optionally `user`, `password`, `secure`, `compression` are specified for each server:

- `host` – The address of the remote server. You can use either the domain or the IPv4 or IPv6 address. If you specify the domain, the server makes a DNS request when it starts, and the result is stored as long as the server is running. If the DNS request fails, the server does not start. If you change the DNS record, restart the server.
- `port` – The TCP port for messenger activity (`tcp_port` in the config, usually set to 9000). Do not confuse it with `http_port`.
- `user` – Name of the user for connecting to a remote server. Default value: `default`. This user must have access to connect to the specified server. Access is configured in the `users.xml` file. For more information, see the section [Access rights](../../../operations/access-rights.md).
- `password` – The password for connecting to a remote server (not masked). Default value: empty string.
@ -143,13 +143,13 @@ To view your clusters, use the `system.clusters` table.
The Distributed engine allows working with a cluster like a local server. However, the cluster is inextensible: you must write its configuration in the server config file (ideally, the same configuration on all of the cluster’s servers).

The Distributed engine requires writing clusters to the config file. Clusters from the config file are updated on the fly, without restarting the server. If you need to send a query to an unknown set of shards and replicas each time, you do not need to create a Distributed table – use the `remote` table function instead. See the section [Table functions](../../../sql-reference/table-functions/index.md).

There are two methods for writing data to a cluster:

First, you can define which servers to write which data to and perform the write directly on each shard. In other words, perform INSERT in the tables that the Distributed table “looks at”. This is the most flexible solution, as you can use any sharding scheme, which could be non-trivial due to the requirements of the subject area. This is also the most optimal solution, since data can be written to different shards completely independently.

Second, you can perform INSERT in a Distributed table. In this case, the table will distribute the inserted data across the servers itself. In order to write to a Distributed table, it must have a sharding key set (the last parameter). In addition, if there is only one shard, the write operation works without specifying the sharding key, since it does not mean anything in this case.

Each shard can have a weight defined in the config file. By default, the weight is equal to one. Data is distributed across shards in an amount proportional to the shard weight. For example, if there are two shards and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9 / 19 of the rows, and the second will be sent 10 / 19.
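As a sketch of the second method (the `example_cluster` cluster and the `hits_local` table are illustrative assumptions), a Distributed table with a sharding key could be declared like this:

``` sql
CREATE TABLE hits_all AS hits_local
ENGINE = Distributed(example_cluster, default, hits_local, rand())
```

An `INSERT INTO hits_all` then spreads the inserted rows across the shards according to `rand()` and the configured shard weights.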
@ -165,7 +165,7 @@ The sharding expression can be any expression from constants and table columns t
A simple remainder from division is a limited solution for sharding and is not always appropriate. It works for medium and large volumes of data (dozens of servers), but not for very large volumes of data (hundreds of servers or more). In the latter case, use the sharding scheme required by the subject area, rather than using entries in Distributed tables.

SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you do not have to transfer the old data to it. You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently.

You should be concerned about the sharding scheme in the following cases:
@ -9,7 +9,7 @@ ClickHouse allows sending a server the data that is needed for processing a quer
For example, if you have a text file with important user identifiers, you can upload it to the server along with a query that filters by this list.

If you need to run more than one query with a large volume of external data, do not use this feature. It is better to upload the data to the DB ahead of time.

External data can be uploaded using the command-line client (in non-interactive mode) or using the HTTP interface.
@ -24,7 +24,7 @@ The `Format` parameter specifies one of the available file formats. To perform
`INSERT` queries – for output. The available formats are listed in the
[Formats](../../../interfaces/formats.md#formats) section.

ClickHouse does not allow specifying a filesystem path for `File`. It will use the folder defined by the [path](../../../operations/server-configuration-parameters/settings.md) setting in the server configuration.

When creating a table using `File(Format)`, ClickHouse creates an empty subdirectory in that folder. When data is written to that table, it is put into a `data.Format` file in that subdirectory.
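For example, a minimal sketch (the table name and structure here are illustrative):

``` sql
CREATE TABLE file_engine_table (name String, value UInt32) ENGINE = File(TabSeparated)
```

The data for this table would then live in a `data.TabSeparated` file in that subdirectory.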
@ -28,7 +28,7 @@ See the detailed description of the [CREATE TABLE](../../../sql-reference/statem
- `join_type` – [JOIN type](../../../sql-reference/statements/select/join.md#select-join-types).
- `k1[, k2, ...]` – Key columns from the `USING` clause that the `JOIN` operation is made with.

Enter the `join_strictness` and `join_type` parameters without quotes, for example, `Join(ANY, LEFT, col1)`. They must match the `JOIN` operation that the table will be used for. If the parameters do not match, ClickHouse does not throw an exception and may return incorrect data.

## Table Usage {#table-usage}
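A hedged usage sketch (the table, column names, and sample values are illustrative; note that the `ANY LEFT JOIN` in the query matches the `Join(ANY, LEFT, id)` engine parameters):

``` sql
CREATE TABLE id_val_join (`id` UInt32, `val` UInt8) ENGINE = Join(ANY, LEFT, id);

INSERT INTO id_val_join VALUES (1, 21), (1, 22), (3, 23);

SELECT *
FROM (SELECT toUInt32(number) AS id FROM numbers(4)) AS t
ANY LEFT JOIN id_val_join USING (id);
```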
@ -6,7 +6,7 @@ toc_title: Memory
# Memory Table Engine {#memory}

The Memory engine stores data in RAM, in uncompressed form. Data is stored in exactly the same form as it is received when read. In other words, reading from this table is completely free.
Concurrent data access is synchronized. Locks are short: read and write operations do not block each other.
Indexes are not supported. Reading is parallelized.

Maximal throughput (over 10 GB/sec) is reached on simple queries, because there is no reading from the disk, decompressing, or deserializing of data. (We should note that in many cases, the throughput of the MergeTree engine is almost as high.)
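For illustration, declaring a Memory table is a one-liner (the table name and column are arbitrary):

``` sql
CREATE TABLE memory_example (x UInt64) ENGINE = Memory
```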
@ -22,4 +22,4 @@ Here is the illustration of the difference between traditional row-oriented syst
**Columnar**
![Columnar](https://clickhouse.tech/docs/en/images/column-oriented.gif#)

A columnar database is a preferred choice for analytical applications because it allows having many columns in a table just in case, without paying the cost for unused columns at read query execution time. Column-oriented databases are designed for big data processing and data warehousing: they often natively scale out using distributed clusters of low-cost hardware to increase throughput. ClickHouse does this with a combination of [distributed](../../engines/table-engines/special/distributed.md) and [replicated](../../engines/table-engines/mergetree-family/replication.md) tables.
@ -6,7 +6,7 @@ toc_priority: 10
# What Does “ClickHouse” Mean? {#what-does-clickhouse-mean}

It’s a combination of “**Click**stream” and “Data ware**House**”. It comes from the original use case at Yandex.Metrica, where ClickHouse was supposed to keep records of all clicks by people from all over the Internet, and it still does the job. You can read more about this use case on the [ClickHouse history](../../introduction/history.md) page.

This two-part meaning has two consequences:
@ -15,9 +15,9 @@ One of the following batches of those t-shirts was supposed to be given away on
So, what does it mean? Here are some ways to translate *“не тормозит”*:

- If you translate it literally, it’d be something like *“ClickHouse does not press the brake pedal”*.
- If you wanted to express it as close to how it sounds to a Russian person with an IT background, it’d be something like *“If your larger system lags, it’s not because it uses ClickHouse”*.
- Shorter, but not so precise, versions could be *“ClickHouse is not slow”*, *“ClickHouse does not lag”* or just *“ClickHouse is fast”*.

If you haven’t seen one of those t-shirts in person, you can check them out online in many ClickHouse-related videos. For example, this one:
@ -31,7 +31,7 @@ All database management systems could be classified into two groups: OLAP (Onlin
In practice, OLAP and OLTP are not strict categories; it’s more like a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple integrated storage systems, which might not be a big deal, but having more systems makes maintenance more expensive. So the trend of recent years is HTAP (**Hybrid Transactional/Analytical Processing**), when both kinds of workload are handled equally well by a single database management system.

Even if a DBMS started as a pure OLAP or a pure OLTP system, it is forced to move in that HTAP direction to keep up with the competition. ClickHouse is no exception: initially, it was designed as a [fast-as-possible OLAP system](../../faq/general/why-clickhouse-is-so-fast.md), and it still does not have full-fledged transaction support, but some features like consistent reads/writes and mutations for updating/deleting data had to be added.

The fundamental trade-off between OLAP and OLTP systems remains:
@ -6,9 +6,9 @@ toc_priority: 9
# Who Is Using ClickHouse? {#who-is-using-clickhouse}

Being an open-source product makes this question not so straightforward to answer. You do not have to tell anyone if you want to start using ClickHouse; you just go grab the source code or pre-compiled packages. There’s no contract to sign, and the [Apache 2.0 license](https://github.com/ClickHouse/ClickHouse/blob/master/LICENSE) allows for unconstrained software distribution.

Also, the technology stack is often in a grey zone of what’s covered by an NDA. Some companies consider the technologies they use a competitive advantage even if they are open-source, and do not allow employees to share any details publicly. Some see PR risks and allow employees to share implementation details only with their PR department’s approval.

So how to tell who is using ClickHouse?