Merge branch 'master' into ncb/import-export-lz4

Nikolay Degterinsky 2021-08-18 10:20:23 +03:00
commit c245af5243
2709 changed files with 61110 additions and 23821 deletions


@@ -7,6 +7,6 @@ assignees: ''
 ---
-Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
+> Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
-If you still prefer GitHub issues, remove all this text and ask your question here.
+> If you still prefer GitHub issues, remove all this text and ask your question here.


@@ -7,16 +7,20 @@ assignees: ''
 ---
-(you don't have to strictly follow this form)
+> (you don't have to strictly follow this form)
 **Use case**
-A clear and concise description of what the intended usage scenario is.
+> A clear and concise description of what the intended usage scenario is.
 **Describe the solution you'd like**
-A clear and concise description of what you want to happen.
+> A clear and concise description of what you want to happen.
 **Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
+> A clear and concise description of any alternative solutions or features you've considered.
 **Additional context**
-Add any other context or screenshots about the feature request here.
+> Add any other context or screenshots about the feature request here.


@@ -7,10 +7,11 @@ assignees: ''
 ---
-Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.yandex/docs/en/development/build/
+> Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.yandex/docs/en/development/build/
 **Operating system**
-OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside a virtual machine, please mention it too.
+> OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside a virtual machine, please mention it too.
 **Cmake version**


@@ -1,17 +1,17 @@
 ---
 name: Bug report
-about: Create a report to help us improve ClickHouse
+about: Wrong behaviour (visible to users) in official ClickHouse release.
 title: ''
-labels: bug
+labels: 'potential bug'
 assignees: ''
 ---
-You have to provide the following information whenever possible.
+> You have to provide the following information whenever possible.
 **Describe the bug**
-A clear and concise description of what works not as it is supposed to.
+> A clear and concise description of what works not as it is supposed to.
 **Does it reproduce on recent release?**
@@ -19,7 +19,7 @@ A clear and concise description of what works not as it is supposed to.
 **Enable crash reporting**
-If possible, change "enabled" to true in "send_crash_reports" section in `config.xml`:
+> If possible, change "enabled" to true in "send_crash_reports" section in `config.xml`:
 ```
 <send_crash_reports>
@@ -39,12 +39,12 @@ If possible, change "enabled" to true in "send_crash_reports" section in `config
 **Expected behavior**
-A clear and concise description of what you expected to happen.
+> A clear and concise description of what you expected to happen.
 **Error message and/or stacktrace**
-If applicable, add screenshots to help explain your problem.
+> If applicable, add screenshots to help explain your problem.
 **Additional context**
-Add any other context about the problem here.
+> Add any other context about the problem here.


@@ -2,28 +2,26 @@ I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla
 Changelog category (leave one):
 - New Feature
-- Bug Fix
 - Improvement
+- Bug Fix
 - Performance Improvement
 - Backward Incompatible Change
 - Build/Testing/Packaging Improvement
 - Documentation (changelog entry is not required)
-- Other
 - Not for changelog (changelog entry is not required)
 Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
 ...
 Detailed description / Documentation draft:
 ...
-By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
+> By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
-If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
+> If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
-Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/
+> Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/

.gitmodules

@@ -225,6 +225,24 @@
 [submodule "contrib/yaml-cpp"]
     path = contrib/yaml-cpp
     url = https://github.com/ClickHouse-Extras/yaml-cpp.git
+[submodule "contrib/libstemmer_c"]
+    path = contrib/libstemmer_c
+    url = https://github.com/ClickHouse-Extras/libstemmer_c.git
+[submodule "contrib/wordnet-blast"]
+    path = contrib/wordnet-blast
+    url = https://github.com/ClickHouse-Extras/wordnet-blast.git
+[submodule "contrib/lemmagen-c"]
+    path = contrib/lemmagen-c
+    url = https://github.com/ClickHouse-Extras/lemmagen-c.git
 [submodule "contrib/libpqxx"]
     path = contrib/libpqxx
     url = https://github.com/ClickHouse-Extras/libpqxx.git
+[submodule "contrib/sqlite-amalgamation"]
+    path = contrib/sqlite-amalgamation
+    url = https://github.com/azadkuh/sqlite-amalgamation
+[submodule "contrib/s2geometry"]
+    path = contrib/s2geometry
+    url = https://github.com/ClickHouse-Extras/s2geometry.git
+[submodule "contrib/bzip2"]
+    path = contrib/bzip2
+    url = https://github.com/ClickHouse-Extras/bzip2.git

CHANGELOG.md

@@ -1,3 +1,258 @@
### ClickHouse release v21.8, 2021-08-12
#### New Features
* Add support for a part of SQL/JSON standard. [#24148](https://github.com/ClickHouse/ClickHouse/pull/24148) ([l1tsolaiki](https://github.com/l1tsolaiki), [Kseniia Sumarokova](https://github.com/kssenii)).
* Collect common system metrics (in `system.asynchronous_metrics` and `system.asynchronous_metric_log`) on CPU usage, disk usage, memory usage, IO, network, files, load average, CPU frequencies, thermal sensors, EDAC counters, system uptime; also added metrics about the scheduling jitter and the time spent collecting the metrics. It works similar to `atop` in ClickHouse and allows access to monitoring data even if you have no additional tools installed. Close [#9430](https://github.com/ClickHouse/ClickHouse/issues/9430). [#24416](https://github.com/ClickHouse/ClickHouse/pull/24416) ([alexey-milovidov](https://github.com/alexey-milovidov), [Yegor Levankov](https://github.com/elevankoff)).
* Add MaterializedPostgreSQL table engine and database engine. This database engine allows replicating a whole database or any subset of database tables. [#20470](https://github.com/ClickHouse/ClickHouse/pull/20470) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add new functions `leftPad()`, `rightPad()`, `leftPadUTF8()`, `rightPadUTF8()`. [#26075](https://github.com/ClickHouse/ClickHouse/pull/26075) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add the `FIRST` keyword to the `ADD INDEX` command to be able to add the index at the beginning of the indices list. [#25904](https://github.com/ClickHouse/ClickHouse/pull/25904) ([xjewer](https://github.com/xjewer)).
* Introduce `system.data_skipping_indices` table containing information about existing data skipping indices. Close [#7659](https://github.com/ClickHouse/ClickHouse/issues/7659). [#25693](https://github.com/ClickHouse/ClickHouse/pull/25693) ([Dmitry Novik](https://github.com/novikd)).
* Add `bin`/`unbin` functions. [#25609](https://github.com/ClickHouse/ClickHouse/pull/25609) ([zhaoyu](https://github.com/zxc111)). See the example after this list.
* Support `Map` and `UInt128`, `Int128`, `UInt256`, `Int256` types in `mapAdd` and `mapSubtract` functions. [#25596](https://github.com/ClickHouse/ClickHouse/pull/25596) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Support `DISTINCT ON (columns)` expression, close [#25404](https://github.com/ClickHouse/ClickHouse/issues/25404). [#25589](https://github.com/ClickHouse/ClickHouse/pull/25589) ([Zijie Lu](https://github.com/TszKitLo40)).
* Add an ability to reset a custom setting to default and remove it from the table's metadata. It allows rolling back the change without knowing the system/config's default. Closes [#14449](https://github.com/ClickHouse/ClickHouse/issues/14449). [#17769](https://github.com/ClickHouse/ClickHouse/pull/17769) ([xjewer](https://github.com/xjewer)).
* Render pipelines as graphs in Web UI if `EXPLAIN PIPELINE graph = 1` query is submitted. [#26067](https://github.com/ClickHouse/ClickHouse/pull/26067) ([alexey-milovidov](https://github.com/alexey-milovidov)).
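A short sketch of a few of the new functions and syntax above (`leftPad`/`rightPad`, `bin`/`unbin`, `DISTINCT ON`), as they might be tried in `clickhouse-client`; the commented results follow the documented semantics and are illustrative, not verified against this exact build:
```
-- Pad 'abc' to length 7 on either side.
SELECT leftPad('abc', 7, '*'), rightPad('abc', 7, '*');
-- ****abc    abc****

-- Encode a string to its per-byte binary representation and back.
SELECT bin('A'), unbin('01000001');
-- 01000001   A

-- DISTINCT ON: keep one row for each distinct value of `id`.
SELECT DISTINCT ON (id) id, value
FROM (SELECT number % 3 AS id, number AS value FROM numbers(10));
```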
#### Performance Improvements
* Compile aggregate functions. Use option `compile_aggregate_expressions` to enable it. [#24789](https://github.com/ClickHouse/ClickHouse/pull/24789) ([Maksim Kita](https://github.com/kitaisreal)). See the example after this list.
* Improve latency of short queries that require reading from tables with many columns. [#26371](https://github.com/ClickHouse/ClickHouse/pull/26371) ([Anton Popov](https://github.com/CurtizJ)).
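For the aggregate-function compilation entry above, a minimal sketch of enabling the option for a session; the aggregation below is a generic example, not taken from the pull request:
```
-- Enable JIT compilation of aggregate function evaluation.
SET compile_aggregate_expressions = 1;

-- A query whose aggregation loop can benefit from compiled evaluation.
SELECT sum(number), avg(number), max(number)
FROM numbers(100000000);
```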
#### Improvements
* Use `Map` data type for system logs tables (`system.query_log`, `system.query_thread_log`, `system.processes`, `system.opentelemetry_span_log`). These tables will be auto-created with new data types. Virtual columns are created to support old queries. Closes [#18698](https://github.com/ClickHouse/ClickHouse/issues/18698). [#23934](https://github.com/ClickHouse/ClickHouse/pull/23934), [#25773](https://github.com/ClickHouse/ClickHouse/pull/25773) ([hexiaoting](https://github.com/hexiaoting), [sundy-li](https://github.com/sundy-li), [Maksim Kita](https://github.com/kitaisreal)).
* For a dictionary with a complex key containing only one attribute, allow not wrapping the key expression in tuple for functions `dictGet`, `dictHas`. [#26130](https://github.com/ClickHouse/ClickHouse/pull/26130) ([Maksim Kita](https://github.com/kitaisreal)).
* Implement function `bin`/`hex` from `AggregateFunction` states. [#26094](https://github.com/ClickHouse/ClickHouse/pull/26094) ([zhaoyu](https://github.com/zxc111)).
* Support arguments of `UUID` type for `empty` and `notEmpty` functions. `UUID` is empty if it is all zeros (nil UUID). Closes [#3446](https://github.com/ClickHouse/ClickHouse/issues/3446). [#25974](https://github.com/ClickHouse/ClickHouse/pull/25974) ([zhaoyu](https://github.com/zxc111)). See the example after this list.
* Add support for `SET SQL_SELECT_LIMIT` in MySQL protocol. Closes [#17115](https://github.com/ClickHouse/ClickHouse/issues/17115). [#25972](https://github.com/ClickHouse/ClickHouse/pull/25972) ([Kseniia Sumarokova](https://github.com/kssenii)).
* More instrumentation for network interaction: add counters for recv/send bytes; add gauges for recvs/sends. Added missing documentation. Close [#5897](https://github.com/ClickHouse/ClickHouse/issues/5897). [#25962](https://github.com/ClickHouse/ClickHouse/pull/25962) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add setting `optimize_move_to_prewhere_if_final`. If query has `FINAL`, the optimization `move_to_prewhere` will be enabled only if both `optimize_move_to_prewhere` and `optimize_move_to_prewhere_if_final` are enabled. Closes [#8684](https://github.com/ClickHouse/ClickHouse/issues/8684). [#25940](https://github.com/ClickHouse/ClickHouse/pull/25940) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Allow complex quoted identifiers of JOINed tables. Close [#17861](https://github.com/ClickHouse/ClickHouse/issues/17861). [#25924](https://github.com/ClickHouse/ClickHouse/pull/25924) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for Unicode (e.g. Chinese, Cyrillic) components in `Nested` data types. Close [#25594](https://github.com/ClickHouse/ClickHouse/issues/25594). [#25923](https://github.com/ClickHouse/ClickHouse/pull/25923) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow `quantiles*` functions to work with `aggregate_functions_null_for_empty`. Close [#25892](https://github.com/ClickHouse/ClickHouse/issues/25892). [#25919](https://github.com/ClickHouse/ClickHouse/pull/25919) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow parameters for parametric aggregate functions to be arbitrary constant expressions (e.g., `1 + 2`), not just literals. It also allows using the query parameters (in parameterized queries like `{param:UInt8}`) inside parametric aggregate functions. Closes [#11607](https://github.com/ClickHouse/ClickHouse/issues/11607). [#25910](https://github.com/ClickHouse/ClickHouse/pull/25910) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correctly throw the exception on the attempt to parse an invalid `Date`. Closes [#6481](https://github.com/ClickHouse/ClickHouse/issues/6481). [#25909](https://github.com/ClickHouse/ClickHouse/pull/25909) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support for multiple includes in configuration. It is possible to include users configuration, remote server configuration from multiple sources. Simply place `<include />` element with `from_zk`, `from_env` or `incl` attribute, and it will be replaced with the substitution. [#24404](https://github.com/ClickHouse/ClickHouse/pull/24404) ([nvartolomei](https://github.com/nvartolomei)).
* Support for queries with a column named `"null"` (it must be specified in back-ticks or double quotes) and `ON CLUSTER`. Closes [#24035](https://github.com/ClickHouse/ClickHouse/issues/24035). [#25907](https://github.com/ClickHouse/ClickHouse/pull/25907) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `LowCardinality`, `Decimal`, and `UUID` for `JSONExtract`. Closes [#24606](https://github.com/ClickHouse/ClickHouse/issues/24606). [#25900](https://github.com/ClickHouse/ClickHouse/pull/25900) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Convert history file from `readline` format to `replxx` format. [#25888](https://github.com/ClickHouse/ClickHouse/pull/25888) ([Azat Khuzhin](https://github.com/azat)).
* Fix an issue which can lead to intersecting parts after `DROP PART` or background deletion of an empty part. [#25884](https://github.com/ClickHouse/ClickHouse/pull/25884) ([alesapin](https://github.com/alesapin)).
* Better handling of lost parts for `ReplicatedMergeTree` tables. Fixes rare inconsistencies in `ReplicationQueue`. Fixes [#10368](https://github.com/ClickHouse/ClickHouse/issues/10368). [#25820](https://github.com/ClickHouse/ClickHouse/pull/25820) ([alesapin](https://github.com/alesapin)).
* Allow starting clickhouse-client with unreadable working directory. [#25817](https://github.com/ClickHouse/ClickHouse/pull/25817) ([ianton-ru](https://github.com/ianton-ru)).
* Fix "No available columns" error for `Merge` storage. [#25801](https://github.com/ClickHouse/ClickHouse/pull/25801) ([Azat Khuzhin](https://github.com/azat)).
* MySQL Engine now supports the exchange of column comments between MySQL and ClickHouse. [#25795](https://github.com/ClickHouse/ClickHouse/pull/25795) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Fix inconsistent behaviour of `GROUP BY` constant on empty set. Closes [#6842](https://github.com/ClickHouse/ClickHouse/issues/6842). [#25786](https://github.com/ClickHouse/ClickHouse/pull/25786) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Cancel already running merges in partition on `DROP PARTITION` and `TRUNCATE` for `ReplicatedMergeTree`. Resolves [#17151](https://github.com/ClickHouse/ClickHouse/issues/17151). [#25684](https://github.com/ClickHouse/ClickHouse/pull/25684) ([tavplubix](https://github.com/tavplubix)).
* Support `ENUM` data type for MaterializeMySQL. [#25676](https://github.com/ClickHouse/ClickHouse/pull/25676) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Support materialized and aliased columns in JOIN, close [#13274](https://github.com/ClickHouse/ClickHouse/issues/13274). [#25634](https://github.com/ClickHouse/ClickHouse/pull/25634) ([Vladimir C](https://github.com/vdimir)).
* Fix possible logical race condition between `ALTER TABLE ... DETACH` and background merges. [#25605](https://github.com/ClickHouse/ClickHouse/pull/25605) ([Azat Khuzhin](https://github.com/azat)).
* Make `NetworkReceiveElapsedMicroseconds` metric to correctly include the time spent waiting for data from the client to `INSERT`. Close [#9958](https://github.com/ClickHouse/ClickHouse/issues/9958). [#25602](https://github.com/ClickHouse/ClickHouse/pull/25602) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `TRUNCATE TABLE` for S3 and HDFS. Close [#25530](https://github.com/ClickHouse/ClickHouse/issues/25530). [#25550](https://github.com/ClickHouse/ClickHouse/pull/25550) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support for dynamic reloading of config to change number of threads in pool for background jobs execution (merges, mutations, fetches). [#25548](https://github.com/ClickHouse/ClickHouse/pull/25548) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Allow extracting of non-string element as string using `JSONExtract`. This is for [#25414](https://github.com/ClickHouse/ClickHouse/issues/25414). [#25452](https://github.com/ClickHouse/ClickHouse/pull/25452) ([Amos Bird](https://github.com/amosbird)).
* Support regular expression in `Database` argument for `StorageMerge`. Close [#776](https://github.com/ClickHouse/ClickHouse/issues/776). [#25064](https://github.com/ClickHouse/ClickHouse/pull/25064) ([flynn](https://github.com/ucasfl)).
* Web UI: if the value looks like a URL, automatically generate a link. [#25965](https://github.com/ClickHouse/ClickHouse/pull/25965) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make `sudo service clickhouse-server start` to work on systems with `systemd` like Centos 8. Close [#14298](https://github.com/ClickHouse/ClickHouse/issues/14298). Close [#17799](https://github.com/ClickHouse/ClickHouse/issues/17799). [#25921](https://github.com/ClickHouse/ClickHouse/pull/25921) ([alexey-milovidov](https://github.com/alexey-milovidov)).
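Two of the improvements above, sketched: the nil-UUID behaviour of `empty`/`notEmpty` and constant expressions as parameters of parametric aggregate functions. The commented results are what the entries describe, shown for illustration:
```
-- The all-zero (nil) UUID now counts as "empty".
SELECT empty(toUUID('00000000-0000-0000-0000-000000000000')),
       notEmpty(toUUID('00000000-0000-0000-0000-000000000000'));
-- 1    0

-- Parameters of parametric aggregate functions may be constant
-- expressions such as `1 + 2`, not only literals.
SELECT topK(1 + 2)(number % 5) FROM numbers(100);
```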
#### Bug Fixes
* Fix incorrect `SET ROLE` in some cases. [#26707](https://github.com/ClickHouse/ClickHouse/pull/26707) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix potential `nullptr` dereference in window functions. Fix [#25276](https://github.com/ClickHouse/ClickHouse/issues/25276). [#26668](https://github.com/ClickHouse/ClickHouse/pull/26668) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix incorrect function names of `groupBitmapAnd/Or/Xor`. Fix [#26557](https://github.com/ClickHouse/ClickHouse/pull/26557) ([Amos Bird](https://github.com/amosbird)).
* Fix crash in RabbitMQ shutdown in case RabbitMQ setup was not started. Closes [#26504](https://github.com/ClickHouse/ClickHouse/issues/26504). [#26529](https://github.com/ClickHouse/ClickHouse/pull/26529) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix issues with `CREATE DICTIONARY` query if dictionary name or database name was quoted. Closes [#26491](https://github.com/ClickHouse/ClickHouse/issues/26491). [#26508](https://github.com/ClickHouse/ClickHouse/pull/26508) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix broken name resolution after rewriting column aliases. Fix [#26432](https://github.com/ClickHouse/ClickHouse/issues/26432). [#26475](https://github.com/ClickHouse/ClickHouse/pull/26475) ([Amos Bird](https://github.com/amosbird)).
* Fix infinite non-joined block stream in `partial_merge_join` close [#26325](https://github.com/ClickHouse/ClickHouse/issues/26325). [#26374](https://github.com/ClickHouse/ClickHouse/pull/26374) ([Vladimir C](https://github.com/vdimir)).
* Fix possible crash when login as dropped user. Fix [#26073](https://github.com/ClickHouse/ClickHouse/issues/26073). [#26363](https://github.com/ClickHouse/ClickHouse/pull/26363) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `optimize_distributed_group_by_sharding_key` for multiple columns (leads to incorrect result w/ `optimize_skip_unused_shards=1`/`allow_nondeterministic_optimize_skip_unused_shards=1` and multiple columns in sharding key expression). [#26353](https://github.com/ClickHouse/ClickHouse/pull/26353) ([Azat Khuzhin](https://github.com/azat)).
* `CAST` from `Date` to `DateTime` (or `DateTime64`) was not using the timezone of the `DateTime` type. It can also affect the comparison between `Date` and `DateTime`. Inference of the common type for `Date` and `DateTime` also was not using the corresponding timezone. It affected the results of function `if` and array construction. Closes [#24128](https://github.com/ClickHouse/ClickHouse/issues/24128). [#24129](https://github.com/ClickHouse/ClickHouse/pull/24129) ([Maksim Kita](https://github.com/kitaisreal)). See the example after this list.
* Fixed rare bug in lost replica recovery that may cause replicas to diverge. [#26321](https://github.com/ClickHouse/ClickHouse/pull/26321) ([tavplubix](https://github.com/tavplubix)).
* Fix zstd decompression in case there are escape sequences at the end of internal buffer. Closes [#26013](https://github.com/ClickHouse/ClickHouse/issues/26013). [#26314](https://github.com/ClickHouse/ClickHouse/pull/26314) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix logical error on join with totals, close [#26017](https://github.com/ClickHouse/ClickHouse/issues/26017). [#26250](https://github.com/ClickHouse/ClickHouse/pull/26250) ([Vladimir C](https://github.com/vdimir)).
* Remove excessive newline in `thread_name` column in `system.stack_trace` table. Fix [#24124](https://github.com/ClickHouse/ClickHouse/issues/24124). [#26210](https://github.com/ClickHouse/ClickHouse/pull/26210) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `joinGet` with `LowCardinality` columns, close [#25993](https://github.com/ClickHouse/ClickHouse/issues/25993). [#26118](https://github.com/ClickHouse/ClickHouse/pull/26118) ([Vladimir C](https://github.com/vdimir)).
* Fix possible crash in `pointInPolygon` if the setting `validate_polygons` is turned off. [#26113](https://github.com/ClickHouse/ClickHouse/pull/26113) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix exception being thrown when iterating over a non-existent remote directory. [#26087](https://github.com/ClickHouse/ClickHouse/pull/26087) ([ianton-ru](https://github.com/ianton-ru)).
* Fix rare server crash because of `abort` in ZooKeeper client. Fixes [#25813](https://github.com/ClickHouse/ClickHouse/issues/25813). [#26079](https://github.com/ClickHouse/ClickHouse/pull/26079) ([alesapin](https://github.com/alesapin)).
* Fix wrong thread count estimation for right subquery join in some cases. Close [#24075](https://github.com/ClickHouse/ClickHouse/issues/24075). [#26052](https://github.com/ClickHouse/ClickHouse/pull/26052) ([Vladimir C](https://github.com/vdimir)).
* Fixed incorrect `sequence_id` in MySQL protocol packets that ClickHouse sends on exception during query execution. It might cause MySQL client to reset connection to ClickHouse server. Fixes [#21184](https://github.com/ClickHouse/ClickHouse/issues/21184). [#26051](https://github.com/ClickHouse/ClickHouse/pull/26051) ([tavplubix](https://github.com/tavplubix)).
* Fix possible mismatched header when using normal projection with `PREWHERE`. Fix [#26020](https://github.com/ClickHouse/ClickHouse/issues/26020). [#26038](https://github.com/ClickHouse/ClickHouse/pull/26038) ([Amos Bird](https://github.com/amosbird)).
* Fix formatting of type `Map` with integer keys to `JSON`. [#25982](https://github.com/ClickHouse/ClickHouse/pull/25982) ([Anton Popov](https://github.com/CurtizJ)).
* Fix possible deadlock during query profiler stack unwinding. Fix [#25968](https://github.com/ClickHouse/ClickHouse/issues/25968). [#25970](https://github.com/ClickHouse/ClickHouse/pull/25970) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix crash on call `dictGet()` with bad arguments. [#25913](https://github.com/ClickHouse/ClickHouse/pull/25913) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed `scram-sha-256` authentication for PostgreSQL engines. Closes [#24516](https://github.com/ClickHouse/ClickHouse/issues/24516). [#25906](https://github.com/ClickHouse/ClickHouse/pull/25906) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix extremely long backoff for background tasks when the background pool is full. Fixes [#25836](https://github.com/ClickHouse/ClickHouse/issues/25836). [#25893](https://github.com/ClickHouse/ClickHouse/pull/25893) ([alesapin](https://github.com/alesapin)).
* Fix ARM exception handling with non-default page size. Fixes [#25512](https://github.com/ClickHouse/ClickHouse/issues/25512), [#25044](https://github.com/ClickHouse/ClickHouse/issues/25044), [#24901](https://github.com/ClickHouse/ClickHouse/issues/24901), [#23183](https://github.com/ClickHouse/ClickHouse/issues/23183), [#20221](https://github.com/ClickHouse/ClickHouse/issues/20221), [#19703](https://github.com/ClickHouse/ClickHouse/issues/19703), [#19028](https://github.com/ClickHouse/ClickHouse/issues/19028), [#18391](https://github.com/ClickHouse/ClickHouse/issues/18391), [#18121](https://github.com/ClickHouse/ClickHouse/issues/18121), [#17994](https://github.com/ClickHouse/ClickHouse/issues/17994), [#12483](https://github.com/ClickHouse/ClickHouse/issues/12483). [#25854](https://github.com/ClickHouse/ClickHouse/pull/25854) ([Maksim Kita](https://github.com/kitaisreal)).
* Fix sharding_key from column w/o function for `remote()` (before `select * from remote('127.1', system.one, dummy)` leads to `Unknown column: dummy, there are only columns .` error). [#25824](https://github.com/ClickHouse/ClickHouse/pull/25824) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `Not found column ...` and `Missing column ...` errors when selecting from `MaterializeMySQL`. Fixes [#23708](https://github.com/ClickHouse/ClickHouse/issues/23708), [#24830](https://github.com/ClickHouse/ClickHouse/issues/24830), [#25794](https://github.com/ClickHouse/ClickHouse/issues/25794). [#25822](https://github.com/ClickHouse/ClickHouse/pull/25822) ([tavplubix](https://github.com/tavplubix)).
* Fix `optimize_skip_unused_shards_rewrite_in` for non-UInt64 types (may select incorrect shards eventually or throw `Cannot infer type of an empty tuple` or `Function tuple requires at least one argument`). [#25798](https://github.com/ClickHouse/ClickHouse/pull/25798) ([Azat Khuzhin](https://github.com/azat)).
* Fix rare bug with `DROP PART` query for `ReplicatedMergeTree` tables which can lead to error message `Unexpected merged part intersecting drop range`. [#25783](https://github.com/ClickHouse/ClickHouse/pull/25783) ([alesapin](https://github.com/alesapin)).
* Fix bug in `TTL` with `GROUP BY` expression which refuses to execute `TTL` after first execution in part. [#25743](https://github.com/ClickHouse/ClickHouse/pull/25743) ([alesapin](https://github.com/alesapin)).
* Allow StorageMerge to access tables with aliases. Closes [#6051](https://github.com/ClickHouse/ClickHouse/issues/6051). [#25694](https://github.com/ClickHouse/ClickHouse/pull/25694) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix slow dict join in some cases, close [#24209](https://github.com/ClickHouse/ClickHouse/issues/24209). [#25618](https://github.com/ClickHouse/ClickHouse/pull/25618) ([Vladimir C](https://github.com/vdimir)).
* Fix `ALTER MODIFY COLUMN` of columns, which participates in TTL expressions. [#25554](https://github.com/ClickHouse/ClickHouse/pull/25554) ([Anton Popov](https://github.com/CurtizJ)).
* Fix assertion in `PREWHERE` with non-UInt8 type, close [#19589](https://github.com/ClickHouse/ClickHouse/issues/19589). [#25484](https://github.com/ClickHouse/ClickHouse/pull/25484) ([Vladimir C](https://github.com/vdimir)).
* Fix a crash found by MSan fuzzing. Fixes [#22517](https://github.com/ClickHouse/ClickHouse/issues/22517). [#26428](https://github.com/ClickHouse/ClickHouse/pull/26428) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Update `chown` cmd check in `clickhouse-server` docker entrypoint. It fixes error 'cluster pod restart failed (or timeout)' on kubernetes. [#26545](https://github.com/ClickHouse/ClickHouse/pull/26545) ([Ky Li](https://github.com/Kylinrix)).
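For the `Date`-to-`DateTime` timezone fix above ([#24129]), a sketch of the corrected behaviour: `CAST` now uses the timezone of the target `DateTime` type, so the two values below represent midnight of 2021-08-15 in their respective timezones:
```
-- The timezone of the target DateTime type is now respected by CAST.
SELECT CAST(toDate('2021-08-15') AS DateTime('UTC'))           AS in_utc,
       CAST(toDate('2021-08-15') AS DateTime('Asia/Istanbul')) AS in_istanbul;
```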
### ClickHouse release v21.7, 2021-07-09
#### Backward Incompatible Change
* Improved performance of queries with explicitly defined large sets. Added compatibility setting `legacy_column_name_of_tuple_literal`. It makes sense to set it to `true`, while doing rolling update of cluster from version lower than 21.7 to any higher version. Otherwise distributed queries with explicitly defined sets at `IN` clause may fail during update. [#25371](https://github.com/ClickHouse/ClickHouse/pull/25371) ([Anton Popov](https://github.com/CurtizJ)). See the example after this list.
* Forward/backward incompatible change of maximum buffer size in clickhouse-keeper (an experimental alternative to ZooKeeper). Better to do it now (before production), than later. [#25421](https://github.com/ClickHouse/ClickHouse/pull/25421) ([alesapin](https://github.com/alesapin)).
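A minimal sketch of the compatibility advice in the first entry above, assuming a rolling update of a mixed-version cluster; `dist_table` is a hypothetical Distributed table:
```
-- Keep the pre-21.7 column naming of tuple literals while some replicas
-- still run older versions, so distributed queries with explicit IN sets
-- keep working during the rolling update.
SET legacy_column_name_of_tuple_literal = 1;

SELECT count() FROM dist_table WHERE (a, b) IN ((1, 'x'), (2, 'y'));
```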
#### New Feature
* Support configuration in YAML format as alternative to XML. This closes [#3607](https://github.com/ClickHouse/ClickHouse/issues/3607). [#21858](https://github.com/ClickHouse/ClickHouse/pull/21858) ([BoloniniD](https://github.com/BoloniniD)).
* Provides a way to restore replicated table when the data is (possibly) present, but the ZooKeeper metadata is lost. Resolves [#13458](https://github.com/ClickHouse/ClickHouse/issues/13458). [#13652](https://github.com/ClickHouse/ClickHouse/pull/13652) ([Mike Kot](https://github.com/myrrc)).
* Support structs and maps in Arrow/Parquet/ORC and dictionaries in Arrow input/output formats. Present new setting `output_format_arrow_low_cardinality_as_dictionary`. [#24341](https://github.com/ClickHouse/ClickHouse/pull/24341) ([Kruglov Pavel](https://github.com/Avogar)).
* Added support for `Array` type in dictionaries. [#25119](https://github.com/ClickHouse/ClickHouse/pull/25119) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `bitPositionsToArray`. Closes [#23792](https://github.com/ClickHouse/ClickHouse/issues/23792). Author Kevin Wan (@MaxWk). [#25394](https://github.com/ClickHouse/ClickHouse/pull/25394) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dateName` to return names like 'Friday' or 'April'. Author Daniil Kondratyev (@dankondr). [#25372](https://github.com/ClickHouse/ClickHouse/pull/25372) ([Maksim Kita](https://github.com/kitaisreal)). See the example after this list.
* Add `toJSONString` function to serialize columns to their JSON representations. [#25164](https://github.com/ClickHouse/ClickHouse/pull/25164) ([Amos Bird](https://github.com/amosbird)).
* Now `query_log` has two new columns: `initial_query_start_time`, `initial_query_start_time_microsecond` that record the starting time of a distributed query if any. [#25022](https://github.com/ClickHouse/ClickHouse/pull/25022) ([Amos Bird](https://github.com/amosbird)).
* Add aggregate function `segmentLengthSum`. [#24250](https://github.com/ClickHouse/ClickHouse/pull/24250) ([flynn](https://github.com/ucasfl)).
* Add a new boolean setting `prefer_global_in_and_join` which defaults all IN/JOIN as GLOBAL IN/JOIN. [#23434](https://github.com/ClickHouse/ClickHouse/pull/23434) ([Amos Bird](https://github.com/amosbird)).
* Support `ALTER DELETE` queries for `Join` table engine. [#23260](https://github.com/ClickHouse/ClickHouse/pull/23260) ([foolchi](https://github.com/foolchi)).
* Add `quantileBFloat16` aggregate function as well as the corresponding `quantilesBFloat16` and `medianBFloat16`. It is very simple and fast quantile estimator with relative error not more than 0.390625%. This closes [#16641](https://github.com/ClickHouse/ClickHouse/issues/16641). [#23204](https://github.com/ClickHouse/ClickHouse/pull/23204) ([Ivan Novitskiy](https://github.com/RedClusive)).
* Implement `sequenceNextNode()` function useful for `flow analysis`. [#19766](https://github.com/ClickHouse/ClickHouse/pull/19766) ([achimbab](https://github.com/achimbab)).
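A sketch of a few of the new functions above (`dateName`, `bitPositionsToArray`, `toJSONString`); the commented results follow the entries' descriptions and are illustrative:
```
-- Human-readable date parts (2021-07-09 was a Friday).
SELECT dateName('weekday', toDate('2021-07-09')),
       dateName('month',   toDate('2021-07-09'));
-- Friday    July

-- Positions of set bits: 10 = 0b1010.
SELECT bitPositionsToArray(10);
-- [1, 3]

-- Serialize a value to its JSON representation.
SELECT toJSONString(map('a', 1, 'b', 2));
-- {"a":1,"b":2}
```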
#### Experimental Feature
* Add support for virtual filesystem over HDFS. [#11058](https://github.com/ClickHouse/ClickHouse/pull/11058) ([overshov](https://github.com/overshov)) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Now clickhouse-keeper (an experimental alternative to ZooKeeper) supports ZooKeeper-like `digest` ACLs. [#24448](https://github.com/ClickHouse/ClickHouse/pull/24448) ([alesapin](https://github.com/alesapin)).
#### Performance Improvement
* Added optimization that transforms some functions to reading of subcolumns to reduce amount of read data. E.g., statement `col IS NULL` is transformed to reading of subcolumn `col.null`. Optimization can be enabled by setting `optimize_functions_to_subcolumns` which is currently off by default. [#24406](https://github.com/ClickHouse/ClickHouse/pull/24406) ([Anton Popov](https://github.com/CurtizJ)). See the example after this list.
* Rewrite more columns to possible alias expressions. This may enable better optimization, such as projections. [#24405](https://github.com/ClickHouse/ClickHouse/pull/24405) ([Amos Bird](https://github.com/amosbird)).
* Index of type `bloom_filter` can be used for expressions with `hasAny` function with constant arrays. This closes: [#24291](https://github.com/ClickHouse/ClickHouse/issues/24291). [#24900](https://github.com/ClickHouse/ClickHouse/pull/24900) ([Vasily Nemkov](https://github.com/Enmk)).
* Add exponential backoff to reschedule read attempt in case RabbitMQ queues are empty. (ClickHouse has support for importing data from RabbitMQ). Closes [#24340](https://github.com/ClickHouse/ClickHouse/issues/24340). [#24415](https://github.com/ClickHouse/ClickHouse/pull/24415) ([Kseniia Sumarokova](https://github.com/kssenii)).
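For the subcolumn optimization above ([#24406]), a sketch of how the rewrite can be observed; `t` and its `Nullable` column `col` are hypothetical:
```
-- The optimization is off by default in this release.
SET optimize_functions_to_subcolumns = 1;

-- With the setting on, `col IS NULL` is rewritten to read only the small
-- `col.null` subcolumn instead of the whole column.
EXPLAIN SYNTAX SELECT count() FROM t WHERE col IS NULL;
```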
#### Improvement
* Allow to limit bandwidth for replication. Add two Replicated\*MergeTree settings: `max_replicated_fetches_network_bandwidth` and `max_replicated_sends_network_bandwidth` which allows to limit maximum speed of replicated fetches/sends for table. Add two server-wide settings (in `default` user profile): `max_replicated_fetches_network_bandwidth_for_server` and `max_replicated_sends_network_bandwidth_for_server` which limit maximum speed of replication for all tables. The settings are not followed perfectly accurately. Turned off by default. Fixes [#1821](https://github.com/ClickHouse/ClickHouse/issues/1821). [#24573](https://github.com/ClickHouse/ClickHouse/pull/24573) ([alesapin](https://github.com/alesapin)).
* Resource constraints and isolation for ODBC and Library bridges. Use separate `clickhouse-bridge` group and user for bridge processes. Set `oom_score_adj` so the bridges will be first subjects for OOM killer. Set maximum RSS to 1 GiB. Closes [#23861](https://github.com/ClickHouse/ClickHouse/issues/23861). [#25280](https://github.com/ClickHouse/ClickHouse/pull/25280) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add standalone `clickhouse-keeper` symlink to the main `clickhouse` binary. Now it's possible to run coordination without the main clickhouse server. [#24059](https://github.com/ClickHouse/ClickHouse/pull/24059) ([alesapin](https://github.com/alesapin)).
* Use global settings for query to `VIEW`. Fixed the behavior when queries to `VIEW` use local settings, that leads to errors if setting on `CREATE VIEW` and `SELECT` were different. As for now, `VIEW` won't use these modified settings, but you can still pass additional settings in `SETTINGS` section of `CREATE VIEW` query. Close [#20551](https://github.com/ClickHouse/ClickHouse/issues/20551). [#24095](https://github.com/ClickHouse/ClickHouse/pull/24095) ([Vladimir](https://github.com/vdimir)).
* On server start, parts with incorrect partition ID are never removed, but always detached. [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25166](https://github.com/ClickHouse/ClickHouse/pull/25166) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase size of background schedule pool to 128 (`background_schedule_pool_size` setting). It allows avoiding replication queue hung on slow zookeeper connection. [#25072](https://github.com/ClickHouse/ClickHouse/pull/25072) ([alesapin](https://github.com/alesapin)).
* Add merge tree setting `max_parts_to_merge_at_once` which limits the number of parts that can be merged in the background at once. Doesn't affect `OPTIMIZE FINAL` query. Fixes [#1820](https://github.com/ClickHouse/ClickHouse/issues/1820). [#24496](https://github.com/ClickHouse/ClickHouse/pull/24496) ([alesapin](https://github.com/alesapin)).
* Allow `NOT IN` operator to be used in partition pruning. [#24894](https://github.com/ClickHouse/ClickHouse/pull/24894) ([Amos Bird](https://github.com/amosbird)).
* Recognize IPv4 addresses like `127.0.1.1` as local. This is controversial and closes [#23504](https://github.com/ClickHouse/ClickHouse/issues/23504). Michael Filimonov will test this feature. [#24316](https://github.com/ClickHouse/ClickHouse/pull/24316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* ClickHouse database created with MaterializeMySQL (it is an experimental feature) now contains all column comments from the MySQL database that materialized. [#25199](https://github.com/ClickHouse/ClickHouse/pull/25199) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Add settings (`connection_auto_close`/`connection_max_tries`/`connection_pool_size`) for MySQL storage engine. [#24146](https://github.com/ClickHouse/ClickHouse/pull/24146) ([Azat Khuzhin](https://github.com/azat)).
* Improve startup time of Distributed engine. [#25663](https://github.com/ClickHouse/ClickHouse/pull/25663) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Drop replicas from dirname for `internal_replication=true` (allows INSERT into Distributed with a cluster of any number of replicas; previously only 15 replicas were supported, and anything more failed with ENAMETOOLONG while creating a directory for async blocks). [#25513](https://github.com/ClickHouse/ClickHouse/pull/25513) ([Azat Khuzhin](https://github.com/azat)).
* Added support for `Interval` type in `LowCardinality`. It is needed for intermediate values of some expressions. Closes [#21730](https://github.com/ClickHouse/ClickHouse/issues/21730). [#25410](https://github.com/ClickHouse/ClickHouse/pull/25410) ([Vladimir](https://github.com/vdimir)).
* Add `==` operator on time conditions for `sequenceMatch` and `sequenceCount` functions. For example: `sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)`. [#25299](https://github.com/ClickHouse/ClickHouse/pull/25299) ([Christophe Kalenzaga](https://github.com/mga-chka)).
* Add settings `http_max_fields`, `http_max_field_name_size`, `http_max_field_value_size`. [#25296](https://github.com/ClickHouse/ClickHouse/pull/25296) ([Ivan](https://github.com/abyss7)).
* Add support for function `if` with `Decimal` and `Int` types on its branches. This closes [#20549](https://github.com/ClickHouse/ClickHouse/issues/20549). This closes [#10142](https://github.com/ClickHouse/ClickHouse/issues/10142). [#25283](https://github.com/ClickHouse/ClickHouse/pull/25283) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update prompt in `clickhouse-client` and display a message when reconnecting. This closes [#10577](https://github.com/ClickHouse/ClickHouse/issues/10577). [#25281](https://github.com/ClickHouse/ClickHouse/pull/25281) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correct memory tracking in aggregate function `topK`. This closes [#25259](https://github.com/ClickHouse/ClickHouse/issues/25259). [#25260](https://github.com/ClickHouse/ClickHouse/pull/25260) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `topLevelDomain` for IDN hosts (i.e. `example.рф`); previously it returned an empty string for such hosts. [#25103](https://github.com/ClickHouse/ClickHouse/pull/25103) ([Azat Khuzhin](https://github.com/azat)). See the example after this list.
* Detect Linux kernel version at runtime (so that nested epoll works, which is required for `async_socket_for_remote`/`use_hedged_requests`; otherwise remote queries may get stuck). [#25067](https://github.com/ClickHouse/ClickHouse/pull/25067) ([Azat Khuzhin](https://github.com/azat)).
* For distributed query, when `optimize_skip_unused_shards=1`, allow to skip shard with condition like `(sharding key) IN (one-element-tuple)`. (Tuples with many elements were supported. Tuple with single element did not work because it is parsed as literal). [#24930](https://github.com/ClickHouse/ClickHouse/pull/24930) ([Amos Bird](https://github.com/amosbird)).
* Improved log messages of S3 errors, no more double whitespaces in case of empty keys and buckets. [#24897](https://github.com/ClickHouse/ClickHouse/pull/24897) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Some queries require multi-pass semantic analysis. Try reusing built sets for `IN` in this case. [#24874](https://github.com/ClickHouse/ClickHouse/pull/24874) ([Amos Bird](https://github.com/amosbird)).
* Respect `max_distributed_connections` for `insert_distributed_sync` (otherwise for huge clusters and sync insert it may run out of `max_thread_pool_size`). [#24754](https://github.com/ClickHouse/ClickHouse/pull/24754) ([Azat Khuzhin](https://github.com/azat)).
* Avoid hiding errors like `Limit for rows or bytes to read exceeded` for scalar subqueries. [#24545](https://github.com/ClickHouse/ClickHouse/pull/24545) ([nvartolomei](https://github.com/nvartolomei)).
* Make String-to-Int parser stricter so that `toInt64('+')` will throw. [#24475](https://github.com/ClickHouse/ClickHouse/pull/24475) ([Amos Bird](https://github.com/amosbird)).
* If `SSD_CACHE` is created with DDL query, it can be created only inside `user_files` directory. [#24466](https://github.com/ClickHouse/ClickHouse/pull/24466) ([Maksim Kita](https://github.com/kitaisreal)).
* PostgreSQL support for specifying non default schema for insert queries. Closes [#24149](https://github.com/ClickHouse/ClickHouse/issues/24149). [#24413](https://github.com/ClickHouse/ClickHouse/pull/24413) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix IPv6 addresses resolving (i.e. fixes `select * from remote('[::1]', system.one)`). [#24319](https://github.com/ClickHouse/ClickHouse/pull/24319) ([Azat Khuzhin](https://github.com/azat)).
* Fix trailing whitespaces in FROM clause with subqueries in multiline mode, and also changes the output of the queries slightly in a more human friendly way. [#24151](https://github.com/ClickHouse/ClickHouse/pull/24151) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Add ability to split distributed batch on failures (i.e. due to memory limits, corruptions), under `distributed_directory_monitor_split_batch_on_failure` (OFF by default). [#23864](https://github.com/ClickHouse/ClickHouse/pull/23864) ([Azat Khuzhin](https://github.com/azat)).
* Handle column name clashes for `Join` table engine. Closes [#20309](https://github.com/ClickHouse/ClickHouse/issues/20309). [#23769](https://github.com/ClickHouse/ClickHouse/pull/23769) ([Vladimir](https://github.com/vdimir)).
* Display progress for `File` table engine in `clickhouse-local` and on INSERT query in `clickhouse-client` when data is passed to stdin. Closes [#18209](https://github.com/ClickHouse/ClickHouse/issues/18209). [#23656](https://github.com/ClickHouse/ClickHouse/pull/23656) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Bugfixes and improvements of `clickhouse-copier`. Allow to copy tables with different (but compatible) schemas. Closes [#9159](https://github.com/ClickHouse/ClickHouse/issues/9159). Added test to copy ReplacingMergeTree. Closes [#22711](https://github.com/ClickHouse/ClickHouse/issues/22711). Support TTL on columns and Data Skipping Indices. It simply removes it to create internal Distributed table (underlying table will have TTL and skipping indices). Closes [#19384](https://github.com/ClickHouse/ClickHouse/issues/19384). Allow to copy MATERIALIZED and ALIAS columns. There are some cases in which it could be helpful (e.g. if this column is in PRIMARY KEY). Now it could be allowed by setting `allow_to_copy_alias_and_materialized_columns` property to true in task configuration. Closes [#9177](https://github.com/ClickHouse/ClickHouse/issues/9177). Closes [#11007](https://github.com/ClickHouse/ClickHouse/issues/11007). Closes [#9514](https://github.com/ClickHouse/ClickHouse/issues/9514). Added a property `allow_to_drop_target_partitions` in task configuration to drop partition in original table before moving helping tables. Closes [#20957](https://github.com/ClickHouse/ClickHouse/issues/20957). Get rid of `OPTIMIZE DEDUPLICATE` query. This hack was needed, because `ALTER TABLE MOVE PARTITION` was retried many times and plain MergeTree tables don't have deduplication. Closes [#17966](https://github.com/ClickHouse/ClickHouse/issues/17966). Write progress to ZooKeeper node on path `task_path + /status` in JSON format. Closes [#20955](https://github.com/ClickHouse/ClickHouse/issues/20955). Support for ReplicatedTables without arguments. Closes [#24834](https://github.com/ClickHouse/ClickHouse/issues/24834). [#23518](https://github.com/ClickHouse/ClickHouse/pull/23518) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added sleep with backoff between read retries from S3. [#23461](https://github.com/ClickHouse/ClickHouse/pull/23461) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Respect `insert_allow_materialized_columns` (allows materialized columns) for INSERT into `Distributed` table. [#23349](https://github.com/ClickHouse/ClickHouse/pull/23349) ([Azat Khuzhin](https://github.com/azat)).
* Add ability to push down LIMIT for distributed queries. [#23027](https://github.com/ClickHouse/ClickHouse/pull/23027) ([Azat Khuzhin](https://github.com/azat)).
* Fix zero-copy replication with several S3 volumes (Fixes [#22679](https://github.com/ClickHouse/ClickHouse/issues/22679)). [#22864](https://github.com/ClickHouse/ClickHouse/pull/22864) ([ianton-ru](https://github.com/ianton-ru)).
* Resolve the actual port number bound when a user requests any available port from the operating system to show it in the log message. [#25569](https://github.com/ClickHouse/ClickHouse/pull/25569) ([bnaecker](https://github.com/bnaecker)).
* Fixed case, when sometimes conversion of postgres arrays resulted in String data type, not n-dimensional array, because `attndims` works incorrectly in some cases. Closes [#24804](https://github.com/ClickHouse/ClickHouse/issues/24804). [#25538](https://github.com/ClickHouse/ClickHouse/pull/25538) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix conversion of DateTime with timezone for MySQL, PostgreSQL, ODBC. Closes [#5057](https://github.com/ClickHouse/ClickHouse/issues/5057). [#25528](https://github.com/ClickHouse/ClickHouse/pull/25528) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Distinguish KILL MUTATION for different tables (fixes unexpected `Cancelled mutating parts` error). [#25025](https://github.com/ClickHouse/ClickHouse/pull/25025) ([Azat Khuzhin](https://github.com/azat)).
* Allow to declare S3 disk at root of bucket (S3 virtual filesystem is an experimental feature under development). [#24898](https://github.com/ClickHouse/ClickHouse/pull/24898) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Enable reading of subcolumns (e.g. components of Tuples) for distributed tables. [#24472](https://github.com/ClickHouse/ClickHouse/pull/24472) ([Anton Popov](https://github.com/CurtizJ)).
* A feature for MySQL compatibility protocol: make the `user` function return correct output. Closes [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697). [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697) ([sundyli](https://github.com/sundy-li)).
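For the `topLevelDomain` fix above, a sketch of the corrected behaviour for internationalized domain names (result as described in the entry):
```
-- Previously this returned an empty string for IDN hosts.
SELECT topLevelDomain('http://example.рф/some/path');
-- рф
```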
#### Bug Fix
* Improvement for backward compatibility. Use old modulo function version when used in partition key. Closes [#23508](https://github.com/ClickHouse/ClickHouse/issues/23508). [#24157](https://github.com/ClickHouse/ClickHouse/pull/24157) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix extremely rare bug on low-memory servers which can lead to the inability to perform merges without restart. Possibly fixes [#24603](https://github.com/ClickHouse/ClickHouse/issues/24603). [#24872](https://github.com/ClickHouse/ClickHouse/pull/24872) ([alesapin](https://github.com/alesapin)).
* Fix extremely rare error `Tagging already tagged part` in replication queue during concurrent `alter move/replace partition`. Possibly fixes [#22142](https://github.com/ClickHouse/ClickHouse/issues/22142). [#24961](https://github.com/ClickHouse/ClickHouse/pull/24961) ([alesapin](https://github.com/alesapin)).
* Fix potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See [#24523](https://github.com/ClickHouse/ClickHouse/issues/24523). [#25015](https://github.com/ClickHouse/ClickHouse/pull/25015) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the behavior when query `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` does not finish. This was detected on server with extremely low amount of RAM. [#24457](https://github.com/ClickHouse/ClickHouse/pull/24457) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix bug which can lead to ZooKeeper client hung inside clickhouse-server. [#24721](https://github.com/ClickHouse/ClickHouse/pull/24721) ([alesapin](https://github.com/alesapin)).
* If ZooKeeper connection was lost and replica was cloned after restoring the connection, its replication queue might contain outdated entries. Fixed failed assertion when replication queue contains intersecting virtual parts. It may rarely happen if some data part was lost. Print error in log instead of terminating. [#24777](https://github.com/ClickHouse/ClickHouse/pull/24777) ([tavplubix](https://github.com/tavplubix)).
* Fix lost `WHERE` condition in expression-push-down optimization of query plan (setting `query_plan_filter_push_down = 1` by default). Fixes [#25368](https://github.com/ClickHouse/ClickHouse/issues/25368). [#25370](https://github.com/ClickHouse/ClickHouse/pull/25370) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix bug which can lead to intersecting parts after merges with TTL: `Part all_40_40_0 is covered by all_40_40_1 but should be merged into all_40_41_1. This shouldn't happen often.`. [#25549](https://github.com/ClickHouse/ClickHouse/pull/25549) ([alesapin](https://github.com/alesapin)).
* On ZooKeeper connection loss `ReplicatedMergeTree` table might wait for background operations to complete before trying to reconnect. It's fixed, now background operations are stopped forcefully. [#25306](https://github.com/ClickHouse/ClickHouse/pull/25306) ([tavplubix](https://github.com/tavplubix)).
* Fix error `Key expression contains comparison between inconvertible types` for queries with `ARRAY JOIN` in case if array is used in primary key. Fixes [#8247](https://github.com/ClickHouse/ClickHouse/issues/8247). [#25546](https://github.com/ClickHouse/ClickHouse/pull/25546) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong totals for query `WITH TOTALS` and `WITH FILL`. Fixes [#20872](https://github.com/ClickHouse/ClickHouse/issues/20872). [#25539](https://github.com/ClickHouse/ClickHouse/pull/25539) ([Anton Popov](https://github.com/CurtizJ)).
* Fix data race when querying `system.clusters` while reloading the cluster configuration at the same time. [#25737](https://github.com/ClickHouse/ClickHouse/pull/25737) ([Amos Bird](https://github.com/amosbird)).
* Fixed `No such file or directory` error on moving `Distributed` table between databases. Fixes [#24971](https://github.com/ClickHouse/ClickHouse/issues/24971). [#25667](https://github.com/ClickHouse/ClickHouse/pull/25667) ([tavplubix](https://github.com/tavplubix)).
* `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes [#24869](https://github.com/ClickHouse/ClickHouse/issues/24869). [#25665](https://github.com/ClickHouse/ClickHouse/pull/25665) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in `Replicated` database engine that might rarely cause some replica to skip enqueued DDL query. [#24805](https://github.com/ClickHouse/ClickHouse/pull/24805) ([tavplubix](https://github.com/tavplubix)).
* Fix null pointer dereference in `EXPLAIN AST` without query. [#25631](https://github.com/ClickHouse/ClickHouse/pull/25631) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix waiting of automatic dropping of empty parts. It could lead to full filling of background pool and stuck of replication. [#23315](https://github.com/ClickHouse/ClickHouse/pull/23315) ([Anton Popov](https://github.com/CurtizJ)).
* Fix restore of a table stored in S3 virtual filesystem (it is an experimental feature not ready for production). [#25601](https://github.com/ClickHouse/ClickHouse/pull/25601) ([ianton-ru](https://github.com/ianton-ru)).
* Fix nullptr dereference in `Arrow` format when using `Decimal256`. Add `Decimal256` support for `Arrow` format. [#25531](https://github.com/ClickHouse/ClickHouse/pull/25531) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix excessive underscore before the names of the preprocessed configuration files. [#25431](https://github.com/ClickHouse/ClickHouse/pull/25431) ([Vitaly Baranov](https://github.com/vitlibar)).
* A fix for `clickhouse-copier` tool: Fix segfault when sharding_key is absent in task config for copier. [#25419](https://github.com/ClickHouse/ClickHouse/pull/25419) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `REPLACE` column transformer when used in DDL by correctly quoting the formatted query. This fixes [#23925](https://github.com/ClickHouse/ClickHouse/issues/23925). [#25391](https://github.com/ClickHouse/ClickHouse/pull/25391) ([Amos Bird](https://github.com/amosbird)).
* Fix the possibility of non-deterministic behaviour of the `quantileDeterministic` function and similar. This closes [#20480](https://github.com/ClickHouse/ClickHouse/issues/20480). [#25313](https://github.com/ClickHouse/ClickHouse/pull/25313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `SimpleAggregateFunction(LowCardinality)` for `SummingMergeTree`. Fixes [#25134](https://github.com/ClickHouse/ClickHouse/issues/25134). [#25300](https://github.com/ClickHouse/ClickHouse/pull/25300) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix logical error with exception message "Cannot sum Array/Tuple in min/maxMap". [#25298](https://github.com/ClickHouse/ClickHouse/pull/25298) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix error `Bad cast from type DB::ColumnLowCardinality to DB::ColumnVector<char8_t>` for queries where `LowCardinality` argument was used for IN (this bug appeared in 21.6). Fixes [#25187](https://github.com/ClickHouse/ClickHouse/issues/25187). [#25290](https://github.com/ClickHouse/ClickHouse/pull/25290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect behaviour of `joinGetOrNull` with non-nullable columns. This fixes [#24261](https://github.com/ClickHouse/ClickHouse/issues/24261). [#25288](https://github.com/ClickHouse/ClickHouse/pull/25288) ([Amos Bird](https://github.com/amosbird)).
* Fix incorrect behaviour and UBSan report in big integers. In previous versions `CAST(1e19 AS UInt128)` returned zero. [#25279](https://github.com/ClickHouse/ClickHouse/pull/25279) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed an error which occurred while inserting a subset of columns using the `CSVWithNames` format. Fixes [#25129](https://github.com/ClickHouse/ClickHouse/issues/25129). [#25169](https://github.com/ClickHouse/ClickHouse/pull/25169) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not use table's projection for `SELECT` with `FINAL`. It is not supported yet. [#25163](https://github.com/ClickHouse/ClickHouse/pull/25163) ([Amos Bird](https://github.com/amosbird)).
* Fix possible loss of parts after upgrading to 21.5 if the table used `UUID` in its partition key (using `UUID` in a partition key is not recommended). Fixes [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25127](https://github.com/ClickHouse/ClickHouse/pull/25127) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash in query with cross join and `joined_subquery_requires_alias = 0`. Fixes [#24011](https://github.com/ClickHouse/ClickHouse/issues/24011). [#25082](https://github.com/ClickHouse/ClickHouse/pull/25082) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug with constant maps in the `mapContains` function that led to the error `empty column was returned by function mapContains`. Closes [#25077](https://github.com/ClickHouse/ClickHouse/issues/25077). [#25080](https://github.com/ClickHouse/ClickHouse/pull/25080) ([Kruglov Pavel](https://github.com/Avogar)).
* Remove possibility to create tables with columns referencing themselves like `a UInt32 ALIAS a + 1` or `b UInt32 MATERIALIZED b`. Fixes [#24910](https://github.com/ClickHouse/ClickHouse/issues/24910), [#24292](https://github.com/ClickHouse/ClickHouse/issues/24292). [#25059](https://github.com/ClickHouse/ClickHouse/pull/25059) ([alesapin](https://github.com/alesapin)).
* Fix wrong result when an aggregate projection with a *non-empty* `GROUP BY` key is used to execute a query that groups by an *empty* key. [#25055](https://github.com/ClickHouse/ClickHouse/pull/25055) ([Amos Bird](https://github.com/amosbird)).
* Fix serialization of split nested messages in the Protobuf format. This PR fixes [#24647](https://github.com/ClickHouse/ClickHouse/issues/24647). [#25000](https://github.com/ClickHouse/ClickHouse/pull/25000) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix limit/offset settings for distributed queries (they are now ignored on the remote nodes). [#24940](https://github.com/ClickHouse/ClickHouse/pull/24940) ([Azat Khuzhin](https://github.com/azat)).
* Fix possible heap-buffer-overflow in `Arrow` format. [#24922](https://github.com/ClickHouse/ClickHouse/pull/24922) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixed possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3 (S3 virtual filesystem is an experimental feature under development that should not be used in production). [#24885](https://github.com/ClickHouse/ClickHouse/pull/24885) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix "Missing columns" exception when joining Distributed Materialized View. [#24870](https://github.com/ClickHouse/ClickHouse/pull/24870) ([Azat Khuzhin](https://github.com/azat)).
* Allow `NULL` values in postgresql compatibility protocol. Closes [#22622](https://github.com/ClickHouse/ClickHouse/issues/22622). [#24857](https://github.com/ClickHouse/ClickHouse/pull/24857) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix a bug where the exception `Mutation was killed` could be thrown to a client waiting for a mutation that was not yet loaded into memory. [#24809](https://github.com/ClickHouse/ClickHouse/pull/24809) ([alesapin](https://github.com/alesapin)).
* Fixed a bug in deserialization of random generator state which might cause some data types such as `AggregateFunction(groupArraySample(N), T)` to behave in a non-deterministic way. [#24538](https://github.com/ClickHouse/ClickHouse/pull/24538) ([tavplubix](https://github.com/tavplubix)).
* Disallow building uniqXXXXStates of other aggregation states. [#24523](https://github.com/ClickHouse/ClickHouse/pull/24523) ([Raúl Marín](https://github.com/Algunenano)). Then allow it back by actually eliminating the root cause of the related issue. ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix usage of tuples in `CREATE .. AS SELECT` queries. [#24464](https://github.com/ClickHouse/ClickHouse/pull/24464) ([Anton Popov](https://github.com/CurtizJ)).
* Fix computation of total bytes in `Buffer` tables. Previously, the `total_writes.bytes` counter decreased too much during buffer flushes; this led to counter overflow, with `totalBytes` returning something around 17.44 EB some time after the flush (see the short illustration after this list). [#24450](https://github.com/ClickHouse/ClickHouse/pull/24450) ([DimasKovas](https://github.com/DimasKovas)).
* Fix incorrect information about the monotonicity of the `toWeek` function. This fixes [#24422](https://github.com/ClickHouse/ClickHouse/issues/24422). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/5212 and was exposed later by a smarter partition pruner. [#24446](https://github.com/ClickHouse/ClickHouse/pull/24446) ([Amos Bird](https://github.com/amosbird)).
* When user authentication is managed by LDAP, fixed a potential deadlock that could happen during LDAP role (re)mapping, when an LDAP group is mapped to a nonexistent local role. [#24431](https://github.com/ClickHouse/ClickHouse/pull/24431) ([Denis Glazachev](https://github.com/traceon)).
* In "multipart/form-data" message consider the CRLF preceding a boundary as part of it. Fixes [#23905](https://github.com/ClickHouse/ClickHouse/issues/23905). [#24399](https://github.com/ClickHouse/ClickHouse/pull/24399) ([Ivan](https://github.com/abyss7)).
* Fix `DROP PARTITION` in the presence of intersecting fake parts. In rare cases there might be parts with a mutation version greater than the current block number. [#24321](https://github.com/ClickHouse/ClickHouse/pull/24321) ([Amos Bird](https://github.com/amosbird)).
* Fixed a bug in moving Materialized View from Ordinary to Atomic database (`RENAME TABLE` query). Now inner table is moved to new database together with Materialized View. Fixes [#23926](https://github.com/ClickHouse/ClickHouse/issues/23926). [#24309](https://github.com/ClickHouse/ClickHouse/pull/24309) ([tavplubix](https://github.com/tavplubix)).
* Allow empty HTTP headers. Fixes [#23901](https://github.com/ClickHouse/ClickHouse/issues/23901). [#24285](https://github.com/ClickHouse/ClickHouse/pull/24285) ([Ivan](https://github.com/abyss7)).
* Correct processing of mutations (ALTER UPDATE/DELETE) in Memory tables. Closes [#24274](https://github.com/ClickHouse/ClickHouse/issues/24274). [#24275](https://github.com/ClickHouse/ClickHouse/pull/24275) ([flynn](https://github.com/ucasfl)).
* Make the `LowCardinality` property of columns in JOIN output the same as in the input, close [#23351](https://github.com/ClickHouse/ClickHouse/issues/23351), close [#20315](https://github.com/ClickHouse/ClickHouse/issues/20315). [#24061](https://github.com/ClickHouse/ClickHouse/pull/24061) ([Vladimir](https://github.com/vdimir)).
* A fix for Kafka tables: fix the failover behavior where `Engine = Kafka` was not able to start consumption if the same consumer previously had an empty assignment. Closes [#21118](https://github.com/ClickHouse/ClickHouse/issues/21118). [#21267](https://github.com/ClickHouse/ClickHouse/pull/21267) ([filimonov](https://github.com/filimonov)).
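For readers wondering where an absurd figure like 17.44 EB comes from in the `Buffer` entry above: it is the signature of an unsigned byte counter wrapping below zero. A minimal, self-contained C++ illustration of the failure mode (not ClickHouse code):

    #include <cstdint>
    #include <iostream>

    int main()
    {
        uint64_t total_bytes = 1000;  // bytes counted in
        total_bytes -= 2000;          // decremented by more than was added, as during the buggy flush
        std::cout << total_bytes << '\n';  // 18446744073709550616, i.e. exabyte-scale garbage
    }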
#### Build/Testing/Packaging Improvement
* Add `darwin-aarch64` (Mac M1 / Apple Silicon) builds in CI [#25560](https://github.com/ClickHouse/ClickHouse/pull/25560) ([Ivan](https://github.com/abyss7)) and add links to them in the docs and on the website ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Adds cross-platform embedding of binary resources into executables. It works on Illumos. [#25146](https://github.com/ClickHouse/ClickHouse/pull/25146) ([bnaecker](https://github.com/bnaecker)).
* Add join related options to stress tests to improve fuzzing. [#25200](https://github.com/ClickHouse/ClickHouse/pull/25200) ([Vladimir](https://github.com/vdimir)).
* Enable building with the S3 module on macOS [#25217](https://github.com/ClickHouse/ClickHouse/issues/25217). [#25218](https://github.com/ClickHouse/ClickHouse/pull/25218) ([kevin wan](https://github.com/MaxWk)).
* Add integration test cases to cover JDBC bridge. [#25047](https://github.com/ClickHouse/ClickHouse/pull/25047) ([Zhichun Wu](https://github.com/zhicwu)).
* Integration test configuration now has special treatment for dictionaries; the remaining manual dictionary setup was removed. [#24728](https://github.com/ClickHouse/ClickHouse/pull/24728) ([Ilya Yatsishin](https://github.com/qoega)).
* Add libfuzzer tests for YAMLParser class. [#24480](https://github.com/ClickHouse/ClickHouse/pull/24480) ([BoloniniD](https://github.com/BoloniniD)).
* Ubuntu 20.04 is now used to run integration tests, and the docker-compose version used to run them is updated to 1.28.2. Environment variables now take effect in docker-compose. Reworked `test_dictionaries_all_layouts_separate_sources` to allow parallel runs. [#20393](https://github.com/ClickHouse/ClickHouse/pull/20393) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix TOCTOU error in installation script. [#25277](https://github.com/ClickHouse/ClickHouse/pull/25277) ([alexey-milovidov](https://github.com/alexey-milovidov)).
### ClickHouse release 21.6, 2021-06-05

#### Upgrade Notes

@@ -1027,13 +1282,6 @@
 * PODArray: Avoid call to memcpy with (nullptr, 0) arguments (Fix UBSan report). This fixes [#18525](https://github.com/ClickHouse/ClickHouse/issues/18525). [#18526](https://github.com/ClickHouse/ClickHouse/pull/18526) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Minor improvement for path concatenation of zookeeper paths inside DDLWorker. [#17767](https://github.com/ClickHouse/ClickHouse/pull/17767) ([Bharat Nallan](https://github.com/bharatnc)).
 * Allow to reload symbols from debug file. This PR also fixes a build-id issue. [#17637](https://github.com/ClickHouse/ClickHouse/pull/17637) ([Amos Bird](https://github.com/amosbird)).
-* TestFlows: fixes to LDAP tests that fail due to slow test execution. [#18790](https://github.com/ClickHouse/ClickHouse/pull/18790) ([vzakaznikov](https://github.com/vzakaznikov)).
-* TestFlows: Merging requirements for AES encryption functions. Updating aes_encryption tests to use new requirements. Updating TestFlows version to 1.6.72. [#18221](https://github.com/ClickHouse/ClickHouse/pull/18221) ([vzakaznikov](https://github.com/vzakaznikov)).
-* TestFlows: Updating TestFlows version to the latest 1.6.72. Re-generating requirements.py. [#18208](https://github.com/ClickHouse/ClickHouse/pull/18208) ([vzakaznikov](https://github.com/vzakaznikov)).
-* TestFlows: Updating TestFlows README.md to include "How To Debug Why Test Failed" section. [#17808](https://github.com/ClickHouse/ClickHouse/pull/17808) ([vzakaznikov](https://github.com/vzakaznikov)).
-* TestFlows: tests for RBAC [ACCESS MANAGEMENT](https://clickhouse.tech/docs/en/sql-reference/statements/grant/#grant-access-management) privileges. [#17804](https://github.com/ClickHouse/ClickHouse/pull/17804) ([MyroTk](https://github.com/MyroTk)).
-* TestFlows: RBAC tests for SHOW, TRUNCATE, KILL, and OPTIMIZE. Updates to old tests. Resolved comments from https://github.com/ClickHouse/ClickHouse/pull/16977. [#17657](https://github.com/ClickHouse/ClickHouse/pull/17657) ([MyroTk](https://github.com/MyroTk)).
-* TestFlows: Added RBAC tests for `ATTACH`, `CREATE`, `DROP`, and `DETACH`. [#16977](https://github.com/ClickHouse/ClickHouse/pull/16977) ([MyroTk](https://github.com/MyroTk)).

 ## [Changelog for 2020](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/whats-new/changelog/2020.md)
@@ -271,12 +271,6 @@ endif()
 include(cmake/cpu_features.cmake)

-option(ARCH_NATIVE "Add -march=native compiler flag. This makes your binaries non-portable but more performant code may be generated.")
-
-if (ARCH_NATIVE)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")
-endif ()
-
 # Asynchronous unwind tables are needed for Query Profiler.
 # They are already by default on some platforms but possibly not on all platforms.
 # Enable it explicitly.
@@ -401,9 +395,10 @@ endif ()
 # Turns on all external libs like s3, kafka, ODBC, ...
 option(ENABLE_LIBRARIES "Enable all external libraries by default" ON)

-# We recommend avoiding this mode for production builds because we can't guarantee all needed libraries exist in your
-# system.
+# We recommend avoiding this mode for production builds because we can't guarantee
+# all needed libraries exist in your system.
 # This mode exists for enthusiastic developers who are searching for trouble.
+# The whole idea of using unknown version of libraries from the OS distribution is deeply flawed.
 # Useful for maintainers of OS packages.
 option (UNBUNDLED "Use system libraries instead of ones in contrib/" OFF)
@@ -536,11 +531,15 @@ include (cmake/find/rapidjson.cmake)
 include (cmake/find/fastops.cmake)
 include (cmake/find/odbc.cmake)
 include (cmake/find/nanodbc.cmake)
+include (cmake/find/sqlite.cmake)
 include (cmake/find/rocksdb.cmake)
 include (cmake/find/libpqxx.cmake)
 include (cmake/find/nuraft.cmake)
 include (cmake/find/yaml-cpp.cmake)
 include (cmake/find/lz4.cmake)
+include (cmake/find/s2geometry.cmake)
+include (cmake/find/nlp.cmake)
+include (cmake/find/bzip2.cmake)

 if(NOT USE_INTERNAL_PARQUET_LIBRARY)
     set (ENABLE_ORC OFF CACHE INTERNAL "")
@@ -15,4 +15,4 @@ ClickHouse® is an open-source column-oriented database management system that a
 * You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.

 ## Upcoming Events

-* [China ClickHouse Community Meetup (online)](http://hdxu.cn/rhbfZ) on 26 June 2021.
+* [SF Bay Area ClickHouse August Community Meetup (online)](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/279109379/) on 25 August 2021.
@@ -60,6 +60,7 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
     offset_at_start_of_epoch = cctz_time_zone.lookup(cctz_time_zone.lookup(epoch).pre).offset;
     offset_at_start_of_lut = cctz_time_zone.lookup(cctz_time_zone.lookup(lut_start).pre).offset;
     offset_is_whole_number_of_hours_during_epoch = true;
+    offset_is_whole_number_of_minutes_during_epoch = true;

     cctz::civil_day date = lut_start;
@@ -108,6 +109,9 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
         if (offset_is_whole_number_of_hours_during_epoch && start_of_day > 0 && start_of_day % 3600)
             offset_is_whole_number_of_hours_during_epoch = false;

+        if (offset_is_whole_number_of_minutes_during_epoch && start_of_day > 0 && start_of_day % 60)
+            offset_is_whole_number_of_minutes_during_epoch = false;
+
         /// If UTC offset was changed this day.
         /// Change in time zone without transition is possible, e.g. Moscow 1991 Sun, 31 Mar, 02:00 MSK to EEST
         cctz::time_zone::civil_transition transition{};
@@ -18,6 +18,8 @@
 #define DATE_LUT_MAX (0xFFFFFFFFU - 86400)
 #define DATE_LUT_MAX_DAY_NUM 0xFFFF

+/// Max int value of Date32, DATE LUT cache size minus daynum_offset_epoch
+#define DATE_LUT_MAX_EXTEND_DAY_NUM (DATE_LUT_SIZE - 16436)
+
 /// A constant to add to time_t so every supported time point becomes non-negative and still has the same remainder of division by 3600.
 /// If we treat "remainder of division" operation in the sense of modular arithmetic (not like in C++).
@@ -191,6 +193,7 @@ private:
     /// UTC offset at the beginning of the first supported year.
     Time offset_at_start_of_lut;
     bool offset_is_whole_number_of_hours_during_epoch;
+    bool offset_is_whole_number_of_minutes_during_epoch;

     /// Time zone name.
     std::string time_zone;
@@ -249,18 +252,23 @@ private:
     }

     template <typename T, typename Divisor>
-    static inline T roundDown(T x, Divisor divisor)
+    inline T roundDown(T x, Divisor divisor) const
     {
         static_assert(std::is_integral_v<T> && std::is_integral_v<Divisor>);
         assert(divisor > 0);

-        if (likely(x >= 0))
-            return x / divisor * divisor;
-
-        /// Integer division for negative numbers rounds them towards zero (up).
-        /// We will shift the number so it will be rounded towards -inf (down).
-        return (x + 1 - divisor) / divisor * divisor;
+        if (likely(offset_is_whole_number_of_hours_during_epoch))
+        {
+            if (likely(x >= 0))
+                return x / divisor * divisor;
+
+            /// Integer division for negative numbers rounds them towards zero (up).
+            /// We will shift the number so it will be rounded towards -inf (down).
+            return (x + 1 - divisor) / divisor * divisor;
+        }
+
+        Time date = find(x).date;
+        return date + (x - date) / divisor * divisor;
     }

 public:
@@ -270,6 +278,8 @@ public:
     auto getOffsetAtStartOfEpoch() const { return offset_at_start_of_epoch; }
     auto getTimeOffsetAtStartOfLUT() const { return offset_at_start_of_lut; }

+    auto getDayNumOffsetEpoch() const { return daynum_offset_epoch; }
+
     /// All functions below are thread-safe; arguments are not checked.

     inline ExtendedDayNum toDayNum(ExtendedDayNum d) const
@@ -455,10 +465,21 @@ public:
     inline unsigned toSecond(Time t) const
     {
-        auto res = t % 60;
-        if (likely(res >= 0))
-            return res;
-        return res + 60;
+        if (likely(offset_is_whole_number_of_minutes_during_epoch))
+        {
+            Time res = t % 60;
+            if (likely(res >= 0))
+                return res;
+            return res + 60;
+        }
+
+        LUTIndex index = findIndex(t);
+        Time time = t - lut[index].date;
+
+        if (time >= lut[index].time_at_offset_change())
+            time += lut[index].amount_of_offset_change();
+
+        return time % 60;
     }

     inline unsigned toMinute(Time t) const
@@ -479,29 +500,11 @@ public:
     }

     /// NOTE: Assuming timezone offset is a multiple of 15 minutes.
-    inline Time toStartOfMinute(Time t) const { return roundDown(t, 60); }
-    inline Time toStartOfFiveMinute(Time t) const { return roundDown(t, 300); }
-    inline Time toStartOfFifteenMinutes(Time t) const { return roundDown(t, 900); }
-
-    inline Time toStartOfTenMinutes(Time t) const
-    {
-        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
-            return t / 600 * 600;
-
-        /// More complex logic is for Nepal - it has offset 05:45. Australia/Eucla is also unfortunate.
-        Time date = find(t).date;
-        return date + (t - date) / 600 * 600;
-    }
-
-    /// NOTE: Assuming timezone transitions are multiple of hours. Lord Howe Island in Australia is a notable exception.
-    inline Time toStartOfHour(Time t) const
-    {
-        if (t >= 0 && offset_is_whole_number_of_hours_during_epoch)
-            return t / 3600 * 3600;
-
-        Time date = find(t).date;
-        return date + (t - date) / 3600 * 3600;
-    }
+    inline Time toStartOfMinute(Time t) const { return toStartOfMinuteInterval(t, 1); }
+    inline Time toStartOfFiveMinute(Time t) const { return toStartOfMinuteInterval(t, 5); }
+    inline Time toStartOfFifteenMinutes(Time t) const { return toStartOfMinuteInterval(t, 15); }
+    inline Time toStartOfTenMinutes(Time t) const { return toStartOfMinuteInterval(t, 10); }
+    inline Time toStartOfHour(Time t) const { return roundDown(t, 3600); }

     /** Number of calendar day since the beginning of UNIX epoch (1970-01-01 is zero)
      * We use just two bytes for it. It covers the range up to 2105 and slightly more.
@@ -899,25 +902,24 @@ public:
     inline Time toStartOfMinuteInterval(Time t, UInt64 minutes) const
     {
-        if (minutes == 1)
-            return toStartOfMinute(t);
-
-        /** In contrast to "toStartOfHourInterval" function above,
-          * the minute intervals are not aligned to the midnight.
-          * You will get unexpected results if for example, you round down to 60 minute interval
-          * and there was a time shift to 30 minutes.
-          *
-          * But this is not specified in docs and can be changed in future.
-          */
-
-        UInt64 seconds = 60 * minutes;
-        return roundDown(t, seconds);
+        UInt64 divisor = 60 * minutes;
+        if (likely(offset_is_whole_number_of_minutes_during_epoch))
+        {
+            if (likely(t >= 0))
+                return t / divisor * divisor;
+            return (t + 1 - divisor) / divisor * divisor;
+        }
+
+        Time date = find(t).date;
+        return date + (t - date) / divisor * divisor;
     }

     inline Time toStartOfSecondInterval(Time t, UInt64 seconds) const
     {
         if (seconds == 1)
             return t;
+        if (seconds % 60 == 0)
+            return toStartOfMinuteInterval(t, seconds / 60);

         return roundDown(t, seconds);
     }
@@ -926,15 +928,17 @@
     {
         if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
             return LUTIndex(0);
-
-        return LUTIndex{years_months_lut[(year - DATE_LUT_MIN_YEAR) * 12 + month - 1] + day_of_month - 1};
+        auto year_lut_index = (year - DATE_LUT_MIN_YEAR) * 12 + month - 1;
+        UInt32 index = years_months_lut[year_lut_index].toUnderType() + day_of_month - 1;
+        /// When date is out of range, default value is DATE_LUT_SIZE - 1 (2283-11-11)
+        return LUTIndex{std::min(index, static_cast<UInt32>(DATE_LUT_SIZE - 1))};
     }

     /// Create DayNum from year, month, day of month.
-    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month) const
+    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month, Int32 default_error_day_num = 0) const
     {
         if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
-            return ExtendedDayNum(0);
+            return ExtendedDayNum(default_error_day_num);

         return toDayNum(makeLUTIndex(year, month, day_of_month));
     }
@@ -949,7 +953,7 @@
     inline Time makeDateTime(Int16 year, UInt8 month, UInt8 day_of_month, UInt8 hour, UInt8 minute, UInt8 second) const
     {
         size_t index = makeLUTIndex(year, month, day_of_month);
-        UInt32 time_offset = hour * 3600 + minute * 60 + second;
+        Time time_offset = hour * 3600 + minute * 60 + second;

         if (time_offset >= lut[index].time_at_offset_change())
             time_offset -= lut[index].amount_of_offset_change();
@@ -1091,9 +1095,9 @@ public:
         return lut[new_index].date + time;
     }

-    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int64 delta) const
+    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int32 delta) const
     {
-        return addDays(t, delta * 7);
+        return addDays(t, static_cast<Int64>(delta) * 7);
     }

     inline UInt8 saturateDayOfMonth(Int16 year, UInt8 month, UInt8 day_of_month) const
@@ -1158,14 +1162,14 @@
         return toDayNum(addMonthsIndex(d, delta));
     }

-    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int64 delta) const
+    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int32 delta) const
     {
-        return addMonths(t, delta * 3);
+        return addMonths(t, static_cast<Int64>(delta) * 3);
     }

-    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int64 delta) const
+    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int32 delta) const
     {
-        return addMonths(d, delta * 3);
+        return addMonths(d, static_cast<Int64>(delta) * 3);
     }

     template <typename DateOrTime>
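The new `roundDown` and `toStartOfMinuteInterval` code paths above share one idea: when the time zone's UTC offset during the epoch is not a whole number of the rounding unit (e.g. Asia/Kathmandu at +05:45), epoch-aligned division gives wrong boundaries, so the code aligns to the start of the local day instead. A standalone sketch of that technique; `day_start` is a hypothetical stand-in for the `find(t).date` LUT lookup:

    #include <cassert>
    #include <cstdint>

    // Round timestamp t down to a multiple of divisor (in seconds).
    int64_t round_down(int64_t t, int64_t divisor, int64_t day_start,
                       bool offset_is_whole_number_of_units)
    {
        assert(divisor > 0);
        if (offset_is_whole_number_of_units)
        {
            if (t >= 0)
                return t / divisor * divisor;
            // Integer division truncates toward zero; shift negative values
            // so they round toward -inf instead.
            return (t + 1 - divisor) / divisor * divisor;
        }
        // Odd offsets: count intervals from the local start of day.
        return day_start + (t - day_start) / divisor * divisor;
    }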
@@ -70,6 +70,14 @@ public:
         m_day = values.day_of_month;
     }

+    explicit LocalDate(ExtendedDayNum day_num)
+    {
+        const auto & values = DateLUT::instance().getValues(day_num);
+        m_year = values.year;
+        m_month = values.month;
+        m_day = values.day_of_month;
+    }
+
     LocalDate(unsigned short year_, unsigned char month_, unsigned char day_)
         : m_year(year_), m_month(month_), m_day(day_)
     {
@@ -98,6 +106,12 @@ public:
         return DayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
     }

+    ExtendedDayNum getExtenedDayNum() const
+    {
+        const auto & lut = DateLUT::instance();
+        return ExtendedDayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
+    }
+
     operator DayNum() const
     {
         return getDayNum();
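A usage sketch for the new `ExtendedDayNum` round-trip (the include path and availability of the ClickHouse common headers are assumptions):

    #include <common/LocalDate.h>  // assumed path in the ClickHouse source tree

    void example()
    {
        LocalDate date(2021, 8, 18);
        ExtendedDayNum num = date.getExtenedDayNum();  // accessor spelled as in the diff above
        LocalDate back(num);                           // the new constructor added above
        // back now holds the same year/month/day as date.
    }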
@@ -69,7 +69,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
     }

     std::string line;
-    if (!getline(in, line).good())
+    if (getline(in, line).bad())
     {
         rx.print("Cannot read from %s (for conversion): %s\n",
             path.c_str(), errnoToString(errno).c_str());
@@ -78,7 +78,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)

     /// This is the marker of the date, no need to convert.
     static char const REPLXX_TIMESTAMP_PATTERN[] = "### dddd-dd-dd dd:dd:dd.ddd";
-    if (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN))
+    if (line.empty() || (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN)))
     {
         return;
     }
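The `getline` change matters because a freshly created or empty history file makes `getline` fail with `eofbit`/`failbit` set, which is not an I/O error; only `badbit` signals one. A small self-contained illustration:

    #include <iostream>
    #include <sstream>
    #include <string>

    int main()
    {
        std::istringstream empty_file;  // stands in for an empty history file
        std::string line;
        std::getline(empty_file, line);
        // Nothing was read: fail/eof are set, bad is not.
        std::cout << "good=" << empty_file.good()   // 0 -> the old !good() check reported a bogus error
                  << " bad=" << empty_file.bad()    // 0 -> the new bad() check stays quiet
                  << " eof=" << empty_file.eof() << '\n';  // 1
    }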
@@ -1,57 +0,0 @@
#pragma once
#include <new>
#include "defines.h"
#if USE_JEMALLOC
# include <jemalloc/jemalloc.h>
#endif
#if !USE_JEMALLOC || JEMALLOC_VERSION_MAJOR < 4
# include <cstdlib>
#endif
namespace Memory
{
inline ALWAYS_INLINE void * newImpl(std::size_t size)
{
auto * ptr = malloc(size);
if (likely(ptr != nullptr))
return ptr;
/// @note no std::get_new_handler logic implemented
throw std::bad_alloc{};
}
inline ALWAYS_INLINE void * newNoExept(std::size_t size) noexcept
{
return malloc(size);
}
inline ALWAYS_INLINE void deleteImpl(void * ptr) noexcept
{
free(ptr);
}
#if USE_JEMALLOC && JEMALLOC_VERSION_MAJOR >= 4
inline ALWAYS_INLINE void deleteSized(void * ptr, std::size_t size) noexcept
{
if (unlikely(ptr == nullptr))
return;
sdallocx(ptr, size, 0);
}
#else
inline ALWAYS_INLINE void deleteSized(void * ptr, std::size_t size [[maybe_unused]]) noexcept
{
free(ptr);
}
#endif
}
@@ -0,0 +1,24 @@
#pragma once
#include <algorithm>
#include <vector>
/// Removes duplicates from a container without changing the order of its elements.
/// Keeps the last occurrence of each element.
/// Should NOT be used for containers with a lot of elements because it has O(N^2) complexity.
template <typename T>
void removeDuplicatesKeepLast(std::vector<T> & vec)
{
auto begin = vec.begin();
auto end = vec.end();
auto new_begin = end;
for (auto current = end; current != begin;)
{
--current;
if (std::find(new_begin, end, *current) == end)
{
--new_begin;
if (new_begin != current)
*new_begin = *current;
}
}
vec.erase(begin, new_begin);
}
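A quick illustration of the keep-last semantics of the helper above (standalone sketch):

    #include <iostream>
    #include <vector>

    // removeDuplicatesKeepLast from the header above is assumed to be in scope.
    int main()
    {
        std::vector<int> v{1, 2, 1, 3, 2};
        removeDuplicatesKeepLast(v);
        for (int x : v)
            std::cout << x << ' ';  // prints: 1 3 2 — the last occurrence of each value survives
    }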
@@ -259,10 +259,25 @@ private:
     Poco::Logger * log;
     BaseDaemon & daemon;

-    void onTerminate(const std::string & message, UInt32 thread_num) const
+    void onTerminate(std::string_view message, UInt32 thread_num) const
     {
+        size_t pos = message.find('\n');
+
         LOG_FATAL(log, "(version {}{}, {}) (from thread {}) {}",
-            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message);
+            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message.substr(0, pos));
+
+        /// Print trace from std::terminate exception line-by-line to make it easy for grep.
+        while (pos != std::string_view::npos)
+        {
+            ++pos;
+            size_t next_pos = message.find('\n', pos);
+            size_t size = next_pos;
+            if (next_pos != std::string_view::npos)
+                size = next_pos - pos;
+
+            LOG_FATAL(log, "{}", message.substr(pos, size));
+            pos = next_pos;
+        }
     }

     void onFault(
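The loop in `onTerminate` is a plain split-by-`'\n'` over a `std::string_view`; the same pattern in isolation, printing instead of calling LOG_FATAL:

    #include <iostream>
    #include <string_view>

    void print_lines(std::string_view message)
    {
        size_t pos = message.find('\n');
        std::cout << message.substr(0, pos) << '\n';  // first line (or the whole message)
        while (pos != std::string_view::npos)
        {
            ++pos;  // step past the '\n'
            size_t next_pos = message.find('\n', pos);
            size_t size = (next_pos != std::string_view::npos) ? next_pos - pos
                                                               : std::string_view::npos;
            std::cout << message.substr(pos, size) << '\n';
            pos = next_pos;
        }
    }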
@@ -9,10 +9,6 @@ if (GLIBC_COMPATIBILITY)

     check_include_file("sys/random.h" HAVE_SYS_RANDOM_H)

-    if(COMPILER_CLANG)
-        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-builtin-requires-header")
-    endif()
-
     add_headers_and_sources(glibc_compatibility .)
     add_headers_and_sources(glibc_compatibility musl)
     if (ARCH_AARCH64)
@@ -35,11 +31,9 @@ if (GLIBC_COMPATIBILITY)

     add_library(glibc-compatibility STATIC ${glibc_compatibility_sources})

-    if (COMPILER_CLANG)
-        target_compile_options(glibc-compatibility PRIVATE -Wno-unused-command-line-argument)
-    elseif (COMPILER_GCC)
-        target_compile_options(glibc-compatibility PRIVATE -Wno-unused-but-set-variable)
-    endif ()
+    target_no_warning(glibc-compatibility unused-command-line-argument)
+    target_no_warning(glibc-compatibility unused-but-set-variable)
+    target_no_warning(glibc-compatibility builtin-requires-header)

     target_include_directories(glibc-compatibility PRIVATE libcxxabi ${musl_arch_include_dir})
@@ -1,4 +1,5 @@
 #include <sys/auxv.h>
+#include "atomic.h"
 #include <unistd.h> // __environ
 #include <errno.h>
@@ -17,18 +18,7 @@ static size_t __find_auxv(unsigned long type)
     return (size_t) -1;
 }

-__attribute__((constructor)) static void __auxv_init()
-{
-    size_t i;
-    for (i = 0; __environ[i]; i++);
-    __auxv = (unsigned long *) (__environ + i + 1);
-
-    size_t secure_idx = __find_auxv(AT_SECURE);
-    if (secure_idx != ((size_t) -1))
-        __auxv_secure = __auxv[secure_idx];
-}
-
-unsigned long getauxval(unsigned long type)
+unsigned long __getauxval(unsigned long type)
 {
     if (type == AT_SECURE)
         return __auxv_secure;
@@ -43,3 +33,38 @@ unsigned long getauxval(unsigned long type)
     errno = ENOENT;
     return 0;
 }
+
+static void * volatile getauxval_func;
+
+static unsigned long __auxv_init(unsigned long type)
+{
+    if (!__environ)
+    {
+        // __environ is not initialized yet so we can't initialize __auxv right now.
+        // That normally occurs only when getauxval() is called from some sanitizer's internal code.
+        errno = ENOENT;
+        return 0;
+    }
+
+    // Initialize __auxv and __auxv_secure.
+    size_t i;
+    for (i = 0; __environ[i]; i++);
+    __auxv = (unsigned long *) (__environ + i + 1);
+
+    size_t secure_idx = __find_auxv(AT_SECURE);
+    if (secure_idx != ((size_t) -1))
+        __auxv_secure = __auxv[secure_idx];
+
+    // Now we've initialized __auxv, next time getauxval() will only call __get_auxval().
+    a_cas_p(&getauxval_func, (void *)__auxv_init, (void *)__getauxval);
+
+    return __getauxval(type);
+}
+
+// First time getauxval() will call __auxv_init().
+static void * volatile getauxval_func = (void *)__auxv_init;
+
+unsigned long getauxval(unsigned long type)
+{
+    return ((unsigned long (*)(unsigned long))getauxval_func)(type);
+}
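A C++ analogue of the first-call-initializes dispatch used above (the musl-style code does the same with a volatile function pointer and `a_cas_p`); a sketch, not the actual runtime code:

    #include <atomic>
    #include <cstdio>

    using Fn = long (*)(long);

    long fast_path(long x) { return x * 2; }
    long init_path(long x);  // forward declaration

    // The first call lands in init_path; afterwards calls go straight to fast_path.
    std::atomic<Fn> dispatch{init_path};

    long init_path(long x)
    {
        std::puts("one-time setup runs here");
        Fn expected = init_path;
        dispatch.compare_exchange_strong(expected, fast_path);  // like a_cas_p above
        return fast_path(x);
    }

    long call(long x) { return dispatch.load(std::memory_order_relaxed)(x); }

    int main()
    {
        call(1);  // prints the setup message once
        call(2);  // setup is skipped from now on
    }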
@@ -296,7 +296,7 @@ void Pool::initialize()
 Pool::Connection * Pool::allocConnection(bool dont_throw_if_failed_first_time)
 {
-    std::unique_ptr<Connection> conn_ptr{new Connection};
+    std::unique_ptr conn_ptr = std::make_unique<Connection>();

     try
     {
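The `make_unique` change is a standard modernization; a sketch of why it is preferred (generic `Connection` stand-in, not the mysqlxx type):

    #include <memory>

    struct Connection {};

    void example()
    {
        // Before: naked new plus a spelled-out template argument.
        std::unique_ptr<Connection> a{new Connection};
        // After: one allocation expression, no naked new; the assignment form in the
        // diff also leans on C++17 class template argument deduction.
        std::unique_ptr b = std::make_unique<Connection>();
    }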
@@ -4,13 +4,24 @@ QUERIES_FILE="queries.sql"
 TABLE=$1
 TRIES=3

+if [ -x ./clickhouse ]
+then
+    CLICKHOUSE_CLIENT="./clickhouse client"
+elif command -v clickhouse-client >/dev/null 2>&1
+then
+    CLICKHOUSE_CLIENT="clickhouse-client"
+else
+    echo "clickhouse-client is not found"
+    exit 1
+fi
+
 cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
     sync
     echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null

     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(clickhouse-client --time --format=Null --query="$query" 2>&1)
+        RES=$(${CLICKHOUSE_CLIENT} --time --format=Null --max_memory_usage=100G --query="$query" 2>&1)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
@@ -11,8 +11,8 @@ DATASET="${TABLE}_v1.tar.xz"
 QUERIES_FILE="queries.sql"
 TRIES=3

-AMD64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
-AARCH64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_special_build_check/clang-10-aarch64_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
+AMD64_BIN_URL="https://builds.clickhouse.tech/master/amd64/clickhouse"
+AARCH64_BIN_URL="https://builds.clickhouse.tech/master/aarch64/clickhouse"

 # Note: on older Ubuntu versions, 'axel' does not support IPv6. If you are using IPv6-only servers on very old Ubuntu, just don't install 'axel'.
@@ -89,7 +89,7 @@ cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(./clickhouse client --max_memory_usage 100000000000 --time --format=Null --query="$query" 2>&1 ||:)
+        RES=$(./clickhouse client --max_memory_usage 100G --time --format=Null --query="$query" 2>&1 ||:)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
@@ -27,3 +27,22 @@ endmacro ()
 macro (no_warning flag)
     add_warning(no-${flag})
 endmacro ()
+
+# The same but only for specified target.
+macro (target_add_warning target flag)
+    string (REPLACE "-" "_" underscored_flag ${flag})
+    string (REPLACE "+" "x" underscored_flag ${underscored_flag})
+
+    check_cxx_compiler_flag("-W${flag}" SUPPORTS_CXXFLAG_${underscored_flag})
+
+    if (SUPPORTS_CXXFLAG_${underscored_flag})
+        target_compile_options (${target} PRIVATE "-W${flag}")
+    else ()
+        message (WARNING "Flag -W${flag} is unsupported")
+    endif ()
+endmacro ()
+
+macro (target_no_warning target flag)
+    target_add_warning(${target} no-${flag})
+endmacro ()
@@ -2,11 +2,11 @@
 # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
 # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
-SET(VERSION_REVISION 54453)
+SET(VERSION_REVISION 54454)
 SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 8)
+SET(VERSION_MINOR 9)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH fb895056568e26200629c7d19626e92d2dedc70d)
-SET(VERSION_DESCRIBE v21.8.1.1-prestable)
-SET(VERSION_STRING 21.8.1.1)
+SET(VERSION_GITHASH f063e44131a048ba2d9af8075f03700fd5ec3e69)
+SET(VERSION_DESCRIBE v21.9.1.7770-prestable)
+SET(VERSION_STRING 21.9.1.7770)
 # end of autochange
@@ -5,109 +5,128 @@ include (CMakePushCheckState)
 cmake_push_check_state ()

-# gcc -dM -E -mno-sse2 - < /dev/null | sort > gcc-dump-nosse2
-# gcc -dM -E -msse2 - < /dev/null | sort > gcc-dump-sse2
-#define __SSE2__ 1
-#define __SSE2_MATH__ 1
-
-# gcc -dM -E -msse4.1 - < /dev/null | sort > gcc-dump-sse41
-#define __SSE4_1__ 1
-
-set (TEST_FLAG "-msse4.1")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <smmintrin.h>
-    int main() {
-        auto a = _mm_insert_epi8(__m128i(), 0, 0);
-        (void)a;
-        return 0;
-    }
-" HAVE_SSE41)
-if (HAVE_SSE41)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
-endif ()
-
-if (ARCH_PPC64LE)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} -maltivec -D__SSE2__=1 -DNO_WARN_X86_INTRINSICS")
-endif ()
-
-# gcc -dM -E -msse4.2 - < /dev/null | sort > gcc-dump-sse42
-#define __SSE4_2__ 1
-
-set (TEST_FLAG "-msse4.2")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <nmmintrin.h>
-    int main() {
-        auto a = _mm_crc32_u64(0, 0);
-        (void)a;
-        return 0;
-    }
-" HAVE_SSE42)
-if (HAVE_SSE42)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
-endif ()
-
-set (TEST_FLAG "-mssse3")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <tmmintrin.h>
-    int main() {
-        __m64 a = _mm_abs_pi8(__m64());
-        (void)a;
-        return 0;
-    }
-" HAVE_SSSE3)
-
-set (TEST_FLAG "-mavx")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <immintrin.h>
-    int main() {
-        auto a = _mm256_insert_epi8(__m256i(), 0, 0);
-        (void)a;
-        return 0;
-    }
-" HAVE_AVX)
-
-set (TEST_FLAG "-mavx2")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <immintrin.h>
-    int main() {
-        auto a = _mm256_add_epi16(__m256i(), __m256i());
-        (void)a;
-        return 0;
-    }
-" HAVE_AVX2)
-
-set (TEST_FLAG "-mpclmul")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    #include <wmmintrin.h>
-    int main() {
-        auto a = _mm_clmulepi64_si128(__m128i(), __m128i(), 0);
-        (void)a;
-        return 0;
-    }
-" HAVE_PCLMULQDQ)
-
-# gcc -dM -E -mpopcnt - < /dev/null | sort > gcc-dump-popcnt
-#define __POPCNT__ 1
-
-set (TEST_FLAG "-mpopcnt")
-set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
-check_cxx_source_compiles("
-    int main() {
-        auto a = __builtin_popcountll(0);
-        (void)a;
-        return 0;
-    }
-" HAVE_POPCNT)
-
-if (HAVE_POPCNT AND NOT ARCH_AARCH64)
-    set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
-endif ()
+# The variables HAVE_* determine if compiler has support for the flag to use the corresponding instruction set.
+# The options ENABLE_* determine if we will tell compiler to actually use the corresponding instruction set if compiler can do it.
+
+# All of them are unrelated to the instruction set at the host machine
+# (you can compile for newer instruction set on old machines and vice versa).
+
+option (ENABLE_SSSE3 "Use SSSE3 instructions on x86_64" 1)
+option (ENABLE_SSE41 "Use SSE4.1 instructions on x86_64" 1)
+option (ENABLE_SSE42 "Use SSE4.2 instructions on x86_64" 1)
+option (ENABLE_PCLMULQDQ "Use pclmulqdq instructions on x86_64" 1)
+option (ENABLE_POPCNT "Use popcnt instructions on x86_64" 1)
+option (ENABLE_AVX "Use AVX instructions on x86_64" 0)
+option (ENABLE_AVX2 "Use AVX2 instructions on x86_64" 0)
+
+option (ARCH_NATIVE "Add -march=native compiler flag. This makes your binaries non-portable but more performant code may be generated. This option overrides ENABLE_* options for specific instruction set. Highly not recommended to use." 0)
+
+if (ARCH_NATIVE)
+    set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")
+
+else ()
+    set (TEST_FLAG "-mssse3")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <tmmintrin.h>
+        int main() {
+            __m64 a = _mm_abs_pi8(__m64());
+            (void)a;
+            return 0;
+        }
+    " HAVE_SSSE3)
+    if (HAVE_SSSE3 AND ENABLE_SSSE3)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    set (TEST_FLAG "-msse4.1")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <smmintrin.h>
+        int main() {
+            auto a = _mm_insert_epi8(__m128i(), 0, 0);
+            (void)a;
+            return 0;
+        }
+    " HAVE_SSE41)
+    if (HAVE_SSE41 AND ENABLE_SSE41)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    if (ARCH_PPC64LE)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} -maltivec -D__SSE2__=1 -DNO_WARN_X86_INTRINSICS")
+    endif ()
+
+    set (TEST_FLAG "-msse4.2")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <nmmintrin.h>
+        int main() {
+            auto a = _mm_crc32_u64(0, 0);
+            (void)a;
+            return 0;
+        }
+    " HAVE_SSE42)
+    if (HAVE_SSE42 AND ENABLE_SSE42)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    set (TEST_FLAG "-mpclmul")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <wmmintrin.h>
+        int main() {
+            auto a = _mm_clmulepi64_si128(__m128i(), __m128i(), 0);
+            (void)a;
+            return 0;
+        }
+    " HAVE_PCLMULQDQ)
+    if (HAVE_PCLMULQDQ AND ENABLE_PCLMULQDQ)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    set (TEST_FLAG "-mpopcnt")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        int main() {
+            auto a = __builtin_popcountll(0);
+            (void)a;
+            return 0;
+        }
+    " HAVE_POPCNT)
+    if (HAVE_POPCNT AND ENABLE_POPCNT)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    set (TEST_FLAG "-mavx")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <immintrin.h>
+        int main() {
+            auto a = _mm256_insert_epi8(__m256i(), 0, 0);
+            (void)a;
+            return 0;
+        }
+    " HAVE_AVX)
+    if (HAVE_AVX AND ENABLE_AVX)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+
+    set (TEST_FLAG "-mavx2")
+    set (CMAKE_REQUIRED_FLAGS "${TEST_FLAG} -O0")
+    check_cxx_source_compiles("
+        #include <immintrin.h>
+        int main() {
+            auto a = _mm256_add_epi16(__m256i(), __m256i());
+            (void)a;
+            return 0;
+        }
+    " HAVE_AVX2)
+    if (HAVE_AVX2 AND ENABLE_AVX2)
+        set (COMPILER_FLAGS "${COMPILER_FLAGS} ${TEST_FLAG}")
+    endif ()
+endif ()

 cmake_pop_check_state ()
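On the consuming side, code built with these flags can branch at compile time on the macros the flags imply (the same `__POPCNT__`-style macros the old file's comments referenced). A minimal C++ sketch:

    #include <cstdint>

    uint64_t popcount(uint64_t x)
    {
    #if defined(__POPCNT__)
        // -mpopcnt was enabled (HAVE_POPCNT AND ENABLE_POPCNT above).
        return __builtin_popcountll(x);
    #else
        // Portable fallback when the instruction set is not enabled.
        uint64_t n = 0;
        for (; x; x &= x - 1)
            ++n;
        return n;
    #endif
    }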
@@ -53,5 +53,6 @@ macro(clickhouse_embed_binaries)
         set_property(SOURCE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}" APPEND PROPERTY INCLUDE_DIRECTORIES "${EMBED_RESOURCE_DIR}")

         target_sources("${EMBED_TARGET}" PRIVATE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}")
+        set_target_properties("${EMBED_TARGET}" PROPERTIES OBJECT_DEPENDS "${RESOURCE_FILE}")
     endforeach()
 endmacro()
cmake/find/bzip2.cmake (new file)
@@ -0,0 +1,19 @@
option(ENABLE_BZIP2 "Enable bzip2 compression support" ${ENABLE_LIBRARIES})
if (NOT ENABLE_BZIP2)
message (STATUS "bzip2 compression disabled")
return()
endif()
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/bzip2/bzlib.h")
message (WARNING "submodule contrib/bzip2 is missing. to fix try run: \n git submodule update --init --recursive")
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal bzip2 library")
set (USE_BZIP2 0)
return()
endif ()
set (USE_BZIP2 1)
set (BZIP2_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/bzip2")
set (BZIP2_LIBRARY bzip2)
message (STATUS "Using bzip2=${USE_BZIP2}: ${BZIP2_INCLUDE_DIR} : ${BZIP2_LIBRARY}")
cmake/find/nlp.cmake (new file)
@@ -0,0 +1,32 @@
option(ENABLE_NLP "Enable NLP functions support" ${ENABLE_LIBRARIES})
if (NOT ENABLE_NLP)
message (STATUS "NLP functions disabled")
return()
endif()
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/libstemmer_c/Makefile")
message (WARNING "submodule contrib/libstemmer_c is missing. to fix try run: \n git submodule update --init --recursive")
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal libstemmer_c library, NLP functions will be disabled")
set (USE_NLP 0)
return()
endif ()
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/wordnet-blast/CMakeLists.txt")
message (WARNING "submodule contrib/wordnet-blast is missing. to fix try run: \n git submodule update --init --recursive")
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal wordnet-blast library, NLP functions will be disabled")
set (USE_NLP 0)
return()
endif ()
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/lemmagen-c/README.md")
message (WARNING "submodule contrib/lemmagen-c is missing. to fix try run: \n git submodule update --init --recursive")
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal lemmagen-c library, NLP functions will be disabled")
set (USE_NLP 0)
return()
endif ()
set (USE_NLP 1)
message (STATUS "Using Libraries for NLP functions: contrib/wordnet-blast, contrib/libstemmer_c, contrib/lemmagen-c")
cmake/find/s2geometry.cmake (new file)
@@ -0,0 +1,24 @@
option(ENABLE_S2_GEOMETRY "Enable S2 geometry library" ${ENABLE_LIBRARIES})
if (ENABLE_S2_GEOMETRY)
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/s2geometry")
message (WARNING "submodule contrib/s2geometry is missing. to fix try run: \n git submodule update --init --recursive")
set (ENABLE_S2_GEOMETRY 0)
set (USE_S2_GEOMETRY 0)
else()
if (OPENSSL_FOUND)
set (S2_GEOMETRY_LIBRARY s2)
set (S2_GEOMETRY_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/s2geometry/src/s2)
set (USE_S2_GEOMETRY 1)
else()
message (WARNING "S2 uses OpenSSL, but the latter is absent.")
endif()
endif()
if (NOT USE_S2_GEOMETRY)
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't enable S2 geometry library")
endif()
endif()
message (STATUS "Using s2geometry=${USE_S2_GEOMETRY} : ${S2_GEOMETRY_INCLUDE_DIR}")
cmake/find/sqlite.cmake (new file)
@@ -0,0 +1,16 @@
option(ENABLE_SQLITE "Enable sqlite" ${ENABLE_LIBRARIES})
if (NOT ENABLE_SQLITE)
return()
endif()
if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation/sqlite3.c")
message (WARNING "submodule contrib/sqlite3-amalgamation is missing. to fix try run: \n git submodule update --init --recursive")
message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal sqlite library")
set (USE_SQLITE 0)
return()
endif()
set (USE_SQLITE 1)
set (SQLITE_LIBRARY sqlite)
message (STATUS "Using sqlite=${USE_SQLITE}")
@@ -1,4 +1,4 @@
-option(ENABLE_STATS "Enalbe StatsLib library" ${ENABLE_LIBRARIES})
+option(ENABLE_STATS "Enable StatsLib library" ${ENABLE_LIBRARIES})

 if (ENABLE_STATS)
     if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/stats")
contrib/AMQP-CPP (vendored submodule)
@@ -1 +1 @@
-Subproject commit 03781aaff0f10ef41f902b8cf865fe0067180c10
+Subproject commit 1a6c51f4ac51ac56610fa95081bd2f349911375a
@@ -1,3 +1,4 @@
+# Third-party libraries may have substandard code.

 # Put all targets defined here and in added subfolders under "contrib/" folder in GUI-based IDEs by default.
 # Some of third-party projects may override CMAKE_FOLDER or FOLDER property of their targets, so they will
@@ -10,10 +11,8 @@ else ()
 endif ()
 unset (_current_dir_name)

-# Third-party libraries may have substandard code.
-# Also remove a possible source of nondeterminism.
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w -D__DATE__= -D__TIME__= -D__TIMESTAMP__=")
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")

 if (WITH_COVERAGE)
     set (WITHOUT_COVERAGE_LIST ${WITHOUT_COVERAGE})
@@ -329,3 +328,20 @@ endif()
 add_subdirectory(fast_float)
+
+if (USE_NLP)
+    add_subdirectory(libstemmer-c-cmake)
+    add_subdirectory(wordnet-blast-cmake)
+    add_subdirectory(lemmagen-c-cmake)
+endif()
+
+if (USE_BZIP2)
+    add_subdirectory(bzip2-cmake)
+endif()
+
+if (USE_SQLITE)
+    add_subdirectory(sqlite-cmake)
+endif()
+
+if (USE_S2_GEOMETRY)
+    add_subdirectory(s2geometry-cmake)
+endif()
contrib/NuRaft (vendored submodule)
@@ -1 +1 @@
-Subproject commit 976874b7aa7f422bf4ea595bb7d1166c617b1c26
+Subproject commit 7ecb16844af6a9c283ad432d85ecc2e7d1544676
@@ -10,11 +10,12 @@ set (SRCS
     "${LIBRARY_DIR}/src/deferredconsumer.cpp"
     "${LIBRARY_DIR}/src/deferredextreceiver.cpp"
     "${LIBRARY_DIR}/src/deferredget.cpp"
-    "${LIBRARY_DIR}/src/deferredpublisher.cpp"
+    "${LIBRARY_DIR}/src/deferredrecall.cpp"
     "${LIBRARY_DIR}/src/deferredreceiver.cpp"
     "${LIBRARY_DIR}/src/field.cpp"
     "${LIBRARY_DIR}/src/flags.cpp"
     "${LIBRARY_DIR}/src/linux_tcp/openssl.cpp"
+    "${LIBRARY_DIR}/src/linux_tcp/sslerrorprinter.cpp"
     "${LIBRARY_DIR}/src/linux_tcp/tcpconnection.cpp"
     "${LIBRARY_DIR}/src/inbuffer.cpp"
     "${LIBRARY_DIR}/src/receivedframe.cpp"
contrib/arrow (vendored submodule)
@@ -1 +1 @@
-Subproject commit debf751a129bdda9ff4d1e895e08957ff77000a1
+Subproject commit 078e21bad344747b7656ef2d7a4f7410a0a303eb
@ -119,12 +119,9 @@ set(ORC_SRCS
"${ORC_SOURCE_SRC_DIR}/ColumnWriter.cc" "${ORC_SOURCE_SRC_DIR}/ColumnWriter.cc"
"${ORC_SOURCE_SRC_DIR}/Common.cc" "${ORC_SOURCE_SRC_DIR}/Common.cc"
"${ORC_SOURCE_SRC_DIR}/Compression.cc" "${ORC_SOURCE_SRC_DIR}/Compression.cc"
"${ORC_SOURCE_SRC_DIR}/Exceptions.cc"
"${ORC_SOURCE_SRC_DIR}/Int128.cc" "${ORC_SOURCE_SRC_DIR}/Int128.cc"
"${ORC_SOURCE_SRC_DIR}/LzoDecompressor.cc" "${ORC_SOURCE_SRC_DIR}/LzoDecompressor.cc"
"${ORC_SOURCE_SRC_DIR}/MemoryPool.cc" "${ORC_SOURCE_SRC_DIR}/MemoryPool.cc"
"${ORC_SOURCE_SRC_DIR}/OrcFile.cc"
"${ORC_SOURCE_SRC_DIR}/Reader.cc"
"${ORC_SOURCE_SRC_DIR}/RLE.cc" "${ORC_SOURCE_SRC_DIR}/RLE.cc"
"${ORC_SOURCE_SRC_DIR}/RLEv1.cc" "${ORC_SOURCE_SRC_DIR}/RLEv1.cc"
"${ORC_SOURCE_SRC_DIR}/RLEv2.cc" "${ORC_SOURCE_SRC_DIR}/RLEv2.cc"
@ -194,9 +191,18 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/cast.cc" "${LIBRARY_DIR}/compute/cast.cc"
"${LIBRARY_DIR}/compute/exec.cc" "${LIBRARY_DIR}/compute/exec.cc"
"${LIBRARY_DIR}/compute/function.cc" "${LIBRARY_DIR}/compute/function.cc"
"${LIBRARY_DIR}/compute/function_internal.cc"
"${LIBRARY_DIR}/compute/kernel.cc" "${LIBRARY_DIR}/compute/kernel.cc"
"${LIBRARY_DIR}/compute/registry.cc" "${LIBRARY_DIR}/compute/registry.cc"
"${LIBRARY_DIR}/compute/exec/exec_plan.cc"
"${LIBRARY_DIR}/compute/exec/expression.cc"
"${LIBRARY_DIR}/compute/exec/key_compare.cc"
"${LIBRARY_DIR}/compute/exec/key_encode.cc"
"${LIBRARY_DIR}/compute/exec/key_hash.cc"
"${LIBRARY_DIR}/compute/exec/key_map.cc"
"${LIBRARY_DIR}/compute/exec/util.cc"
"${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc" "${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc"
"${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc" "${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc"
"${LIBRARY_DIR}/compute/kernels/aggregate_quantile.cc" "${LIBRARY_DIR}/compute/kernels/aggregate_quantile.cc"
@ -207,6 +213,7 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc" "${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc" "${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc" "${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_dictionary.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc" "${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc" "${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc" "${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc"
@@ -214,15 +221,18 @@ set(ARROW_SRCS
    "${LIBRARY_DIR}/compute/kernels/scalar_cast_temporal.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_compare.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_fill_null.cc"
+   "${LIBRARY_DIR}/compute/kernels/scalar_if_else.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_nested.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_set_lookup.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_string.cc"
+   "${LIBRARY_DIR}/compute/kernels/scalar_temporal.cc"
    "${LIBRARY_DIR}/compute/kernels/scalar_validity.cc"
-   "${LIBRARY_DIR}/compute/kernels/util_internal.cc"
    "${LIBRARY_DIR}/compute/kernels/vector_hash.cc"
    "${LIBRARY_DIR}/compute/kernels/vector_nested.cc"
+   "${LIBRARY_DIR}/compute/kernels/vector_replace.cc"
    "${LIBRARY_DIR}/compute/kernels/vector_selection.cc"
    "${LIBRARY_DIR}/compute/kernels/vector_sort.cc"
+   "${LIBRARY_DIR}/compute/kernels/util_internal.cc"
    "${LIBRARY_DIR}/csv/chunker.cc"
    "${LIBRARY_DIR}/csv/column_builder.cc"
@@ -231,6 +241,7 @@ set(ARROW_SRCS
    "${LIBRARY_DIR}/csv/options.cc"
    "${LIBRARY_DIR}/csv/parser.cc"
    "${LIBRARY_DIR}/csv/reader.cc"
+   "${LIBRARY_DIR}/csv/writer.cc"
    "${LIBRARY_DIR}/ipc/dictionary.cc"
    "${LIBRARY_DIR}/ipc/feather.cc"
@@ -247,6 +258,7 @@ set(ARROW_SRCS
    "${LIBRARY_DIR}/io/interfaces.cc"
    "${LIBRARY_DIR}/io/memory.cc"
    "${LIBRARY_DIR}/io/slow.cc"
+   "${LIBRARY_DIR}/io/stdio.cc"
    "${LIBRARY_DIR}/io/transform.cc"
    "${LIBRARY_DIR}/tensor/coo_converter.cc"
@@ -257,9 +269,9 @@ set(ARROW_SRCS
    "${LIBRARY_DIR}/util/bit_block_counter.cc"
    "${LIBRARY_DIR}/util/bit_run_reader.cc"
    "${LIBRARY_DIR}/util/bit_util.cc"
+   "${LIBRARY_DIR}/util/bitmap.cc"
    "${LIBRARY_DIR}/util/bitmap_builders.cc"
    "${LIBRARY_DIR}/util/bitmap_ops.cc"
-   "${LIBRARY_DIR}/util/bitmap.cc"
    "${LIBRARY_DIR}/util/bpacking.cc"
    "${LIBRARY_DIR}/util/cancel.cc"
    "${LIBRARY_DIR}/util/compression.cc"

contrib/boost vendored

@@ -1 +1 @@
-Subproject commit 1ccbb5a522a571ce83b606dbc2e1011c42ecccfb
+Subproject commit 9cf09dbfd55a5c6202dedbdf40781a51b02c2675


@@ -13,11 +13,12 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
        regex
        context
        coroutine
+       graph
    )

    if(Boost_INCLUDE_DIR AND Boost_FILESYSTEM_LIBRARY AND Boost_FILESYSTEM_LIBRARY AND
       Boost_PROGRAM_OPTIONS_LIBRARY AND Boost_REGEX_LIBRARY AND Boost_SYSTEM_LIBRARY AND Boost_CONTEXT_LIBRARY AND
-      Boost_COROUTINE_LIBRARY)
+      Boost_COROUTINE_LIBRARY AND Boost_GRAPH_LIBRARY)
        set(EXTERNAL_BOOST_FOUND 1)
@@ -32,6 +33,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
        add_library (_boost_system INTERFACE)
        add_library (_boost_context INTERFACE)
        add_library (_boost_coroutine INTERFACE)
+       add_library (_boost_graph INTERFACE)

        target_link_libraries (_boost_filesystem INTERFACE ${Boost_FILESYSTEM_LIBRARY})
        target_link_libraries (_boost_iostreams INTERFACE ${Boost_IOSTREAMS_LIBRARY})
@@ -40,6 +42,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
        target_link_libraries (_boost_system INTERFACE ${Boost_SYSTEM_LIBRARY})
        target_link_libraries (_boost_context INTERFACE ${Boost_CONTEXT_LIBRARY})
        target_link_libraries (_boost_coroutine INTERFACE ${Boost_COROUTINE_LIBRARY})
+       target_link_libraries (_boost_graph INTERFACE ${Boost_GRAPH_LIBRARY})

        add_library (boost::filesystem ALIAS _boost_filesystem)
        add_library (boost::iostreams ALIAS _boost_iostreams)
@@ -48,6 +51,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
        add_library (boost::system ALIAS _boost_system)
        add_library (boost::context ALIAS _boost_context)
        add_library (boost::coroutine ALIAS _boost_coroutine)
+       add_library (boost::graph ALIAS _boost_graph)
    else()
        set(EXTERNAL_BOOST_FOUND 0)
        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system boost")
@@ -221,4 +225,17 @@ if (NOT EXTERNAL_BOOST_FOUND)
    add_library (boost::coroutine ALIAS _boost_coroutine)
    target_include_directories (_boost_coroutine PRIVATE ${LIBRARY_DIR})
    target_link_libraries(_boost_coroutine PRIVATE _boost_context)
+
+   # graph
+   set (SRCS_GRAPH
+       "${LIBRARY_DIR}/libs/graph/src/graphml.cpp"
+       "${LIBRARY_DIR}/libs/graph/src/read_graphviz_new.cpp"
+   )
+
+   add_library (_boost_graph ${SRCS_GRAPH})
+   add_library (boost::graph ALIAS _boost_graph)
+   target_include_directories (_boost_graph PRIVATE ${LIBRARY_DIR})
+   target_link_libraries(_boost_graph PRIVATE _boost_regex)
endif ()
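The point of the boost::graph ALIAS pair above is that consumers can link a single name regardless of whether the system Boost or the bundled sources provided the library. A minimal sketch with a hypothetical consumer target (contrib/wordnet-blast further down does the same thing for real):

    # Hypothetical consumer; only the boost::graph alias name matters here.
    add_library(my_graphml_tool my_graphml_tool.cpp)
    target_link_libraries(my_graphml_tool PRIVATE boost::graph)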

contrib/bzip2 vendored Submodule

@@ -0,0 +1 @@
Subproject commit bf905ea2251191ff9911ae7ec0cfc35d41f9f7f6


@@ -0,0 +1,23 @@
set(BZIP2_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/bzip2")
set(BZIP2_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/bzip2")
set(SRCS
"${BZIP2_SOURCE_DIR}/blocksort.c"
"${BZIP2_SOURCE_DIR}/huffman.c"
"${BZIP2_SOURCE_DIR}/crctable.c"
"${BZIP2_SOURCE_DIR}/randtable.c"
"${BZIP2_SOURCE_DIR}/compress.c"
"${BZIP2_SOURCE_DIR}/decompress.c"
"${BZIP2_SOURCE_DIR}/bzlib.c"
)
# From bzip2/CMakeLists.txt
set(BZ_VERSION "1.0.7")
configure_file (
"${BZIP2_SOURCE_DIR}/bz_version.h.in"
"${BZIP2_BINARY_DIR}/bz_version.h"
)
add_library(bzip2 ${SRCS})
target_include_directories(bzip2 PUBLIC "${BZIP2_SOURCE_DIR}" "${BZIP2_BINARY_DIR}")
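The configure_file() call here expands @VAR@ placeholders from the enclosing CMake scope into the generated header. A standalone sketch of the mechanism, runnable with cmake -P (the template line is an assumption about the shape of bz_version.h.in, not a quote from it):

    set(BZ_VERSION "1.0.7")
    # Write a tiny template, then expand @BZ_VERSION@ into it, mirroring the
    # bz_version.h.in -> bz_version.h step above.
    file(WRITE "demo_version.h.in" "#define BZ_VERSION \"@BZ_VERSION@\"\n")
    configure_file("demo_version.h.in" "demo_version.h" @ONLY)
    # demo_version.h now contains:  #define BZ_VERSION "1.0.7"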


@@ -24,3 +24,15 @@ add_library(roaring ${SRCS})
target_include_directories(roaring PRIVATE "${LIBRARY_DIR}/include/roaring")
target_include_directories(roaring SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/include")
target_include_directories(roaring SYSTEM BEFORE PUBLIC "${LIBRARY_DIR}/cpp")
+
+# We redirect malloc/free family of functions to different functions that will track memory in ClickHouse.
+# Also note that we exploit implicit function declarations.
+target_compile_definitions(roaring PRIVATE
+    -Dmalloc=clickhouse_malloc
+    -Dcalloc=clickhouse_calloc
+    -Drealloc=clickhouse_realloc
+    -Dreallocarray=clickhouse_reallocarray
+    -Dfree=clickhouse_free
+    -Dposix_memalign=clickhouse_posix_memalign)
+
+target_link_libraries(roaring PUBLIC clickhouse_common_io)
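How the redirection works: each -Dname=replacement becomes a preprocessor macro, so a call like malloc(n) in roaring's C sources is textually rewritten to clickhouse_malloc(n) before compilation; because C tolerates implicitly declared functions, no header has to declare the wrappers, and the symbols resolve at link time against clickhouse_common_io. A minimal sketch of the same pattern on a hypothetical target:

    # Hypothetical vendored C library whose allocations should be tracked.
    add_library(tracked_vendor_lib vendor_code.c)
    target_compile_definitions(tracked_vendor_lib PRIVATE
        -Dmalloc=clickhouse_malloc
        -Dfree=clickhouse_free)
    # The clickhouse_* wrapper symbols must come from a linked library.
    target_link_libraries(tracked_vendor_lib PRIVATE clickhouse_common_io)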

contrib/h3 vendored

@@ -1 +1 @@
-Subproject commit e209086ae1b5477307f545a0f6111780edc59940
+Subproject commit c7f46cfd71fb60e2fefc90e28abe81657deff735


@@ -3,21 +3,22 @@ set(H3_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib")
set(SRCS
    "${H3_SOURCE_DIR}/lib/algos.c"
-   "${H3_SOURCE_DIR}/lib/baseCells.c"
-   "${H3_SOURCE_DIR}/lib/bbox.c"
    "${H3_SOURCE_DIR}/lib/coordijk.c"
-   "${H3_SOURCE_DIR}/lib/faceijk.c"
-   "${H3_SOURCE_DIR}/lib/geoCoord.c"
-   "${H3_SOURCE_DIR}/lib/h3Index.c"
-   "${H3_SOURCE_DIR}/lib/h3UniEdge.c"
-   "${H3_SOURCE_DIR}/lib/linkedGeo.c"
-   "${H3_SOURCE_DIR}/lib/localij.c"
-   "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+   "${H3_SOURCE_DIR}/lib/bbox.c"
    "${H3_SOURCE_DIR}/lib/polygon.c"
+   "${H3_SOURCE_DIR}/lib/h3Index.c"
    "${H3_SOURCE_DIR}/lib/vec2d.c"
    "${H3_SOURCE_DIR}/lib/vec3d.c"
    "${H3_SOURCE_DIR}/lib/vertex.c"
+   "${H3_SOURCE_DIR}/lib/linkedGeo.c"
+   "${H3_SOURCE_DIR}/lib/localij.c"
+   "${H3_SOURCE_DIR}/lib/latLng.c"
+   "${H3_SOURCE_DIR}/lib/directedEdge.c"
+   "${H3_SOURCE_DIR}/lib/mathExtensions.c"
+   "${H3_SOURCE_DIR}/lib/iterators.c"
    "${H3_SOURCE_DIR}/lib/vertexGraph.c"
+   "${H3_SOURCE_DIR}/lib/faceijk.c"
+   "${H3_SOURCE_DIR}/lib/baseCells.c"
)

configure_file("${H3_SOURCE_DIR}/include/h3api.h.in" "${H3_BINARY_DIR}/include/h3api.h")


@@ -1,10 +1,9 @@
+# Disabled under OSX until https://github.com/ClickHouse/ClickHouse/issues/27568 is fixed
if (SANITIZE OR NOT (
-    ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE)) OR
-    (OS_DARWIN AND (CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo" OR CMAKE_BUILD_TYPE STREQUAL "Debug"))
-))
+    ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE))))
    if (ENABLE_JEMALLOC)
        message (${RECONFIGURE_MESSAGE_LEVEL}
-                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64, or ppc64le Linux or FreeBSD builds and RelWithDebInfo macOS builds.")
+                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64, or ppc64le Linux or FreeBSD builds")
    endif ()
    set (ENABLE_JEMALLOC OFF)
else ()
contrib/lemmagen-c vendored Submodule

@@ -0,0 +1 @@
Subproject commit 59537bdcf57bbed17913292cb4502d15657231f1


@@ -0,0 +1,9 @@
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/lemmagen-c")
set(LEMMAGEN_INCLUDE_DIR "${LIBRARY_DIR}/include")
set(SRCS
"${LIBRARY_DIR}/src/RdrLemmatizer.cpp"
)
add_library(lemmagen STATIC ${SRCS})
target_include_directories(lemmagen PUBLIC "${LEMMAGEN_INCLUDE_DIR}")


@@ -2,9 +2,5 @@ set (SRCS
    src/metrohash64.cpp
    src/metrohash128.cpp
)
-if (HAVE_SSE42)
-    list (APPEND SRCS src/metrohash128crc.cpp)
-endif ()
+# Not used. Pretty easy to port.
add_library(metrohash ${SRCS})
target_include_directories(metrohash PUBLIC src)


@@ -0,0 +1,31 @@
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/libstemmer_c")
set(STEMMER_INCLUDE_DIR "${LIBRARY_DIR}/include")
FILE ( READ "${LIBRARY_DIR}/mkinc.mak" _CONTENT )
# join the backslash-continued lines into one big line
STRING ( REGEX REPLACE "\\\\\n " " ${LIBRARY_DIR}/" _CONTENT "${_CONTENT}" )
# escape ';' (if any)
STRING ( REGEX REPLACE ";" "\\\\;" _CONTENT "${_CONTENT}" )
# now replace each LF with ';' (this turns the text into a list of lines)
STRING ( REGEX REPLACE "\n" ";" _CONTENT "${_CONTENT}" )
FOREACH ( LINE ${_CONTENT} )
# skip comments (beginning with #)
IF ( NOT "${LINE}" MATCHES "^#.*" )
# parse 'name=value1 value2..." - extract the 'name' part
STRING ( REGEX REPLACE "=.*$" "" _NAME "${LINE}" )
# extract the list of values part
STRING ( REGEX REPLACE "^.*=" "" _LIST "${LINE}" )
# replace runs of spaces with ';' (this turns the line into a list)
STRING ( REGEX REPLACE " +" ";" _LIST "${_LIST}" )
# finally get our two variables
IF ( "${_NAME}" MATCHES "snowball_sources" )
SET ( _SOURCES "${_LIST}" )
ELSEIF ( "${_NAME}" MATCHES "snowball_headers" )
SET ( _HEADERS "${_LIST}" )
ENDIF ()
endif ()
endforeach ()
# all the sources parsed. Now just add the lib
add_library ( stemmer STATIC ${_SOURCES} ${_HEADERS} )
target_include_directories (stemmer PUBLIC "${STEMMER_INCLUDE_DIR}")
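To make the string(REGEX ...) chain above concrete, here is a self-contained sketch runnable with cmake -P, over made-up input shaped like mkinc.mak (the comment-skipping and ';'-escaping steps of the real script are omitted):

    # Fake mkinc.mak content: backslash-continued variable definitions.
    set(_DEMO "snowball_sources= \\\n src_c/stem_ISO_8859_1_english.c \\\n runtime/api.c\nsnowball_headers= include/libstemmer.h")
    # 1. Fold the '\'-continuations, prefixing each continued entry with a directory.
    string(REGEX REPLACE "\\\\\n " " /contrib/libstemmer_c/" _DEMO "${_DEMO}")
    # 2. Turn the remaining newlines into ';' so CMake sees a list of lines.
    string(REGEX REPLACE "\n" ";" _DEMO "${_DEMO}")
    foreach(LINE ${_DEMO})
        # 3. Split each 'name=values' line into the name and its value list.
        string(REGEX REPLACE "=.*$" "" _NAME "${LINE}")
        string(REGEX REPLACE "^.*=" "" _LIST "${LINE}")
        string(REGEX REPLACE " +" ";" _LIST "${_LIST}")
        message(STATUS "${_NAME} -> ${_LIST}")
    endforeach()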

contrib/libstemmer_c vendored Submodule

@@ -0,0 +1 @@
Subproject commit c753054304d87daf460057c1a649c482aa094835


@@ -22,6 +22,7 @@ set(SRCS
    "${LIBRARY_DIR}/src/launcher.cxx"
    "${LIBRARY_DIR}/src/srv_config.cxx"
    "${LIBRARY_DIR}/src/snapshot_sync_req.cxx"
+   "${LIBRARY_DIR}/src/snapshot_sync_ctx.cxx"
    "${LIBRARY_DIR}/src/handle_timeout.cxx"
    "${LIBRARY_DIR}/src/handle_append_entries.cxx"
    "${LIBRARY_DIR}/src/cluster_config.cxx"

contrib/poco vendored

@@ -1 +1 @@
-Subproject commit 5994506908028612869fee627d68d8212dfe7c1e
+Subproject commit 7351c4691b5d401f59e3959adfc5b4fa263b32da

contrib/protobuf vendored

@@ -1 +1 @@
-Subproject commit 73b12814204ad9068ba352914d0dc244648b48ee
+Subproject commit 75601841d172c73ae6bf4ce8121f42b875cdbabd

contrib/rocksdb vendored

@@ -1 +1 @@
-Subproject commit 07c77549a20b63ff6981b400085eba36bb5c80c4
+Subproject commit b6480c69bf3ab6e298e0d019a07fd4f69029b26a


@@ -70,11 +70,6 @@ else()
    endif()
endif()

-set(BUILD_VERSION_CC rocksdb_build_version.cc)
-add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC})
-target_include_directories(rocksdb_build_version PRIVATE "${ROCKSDB_SOURCE_DIR}/util")
-
include(CheckCCompilerFlag)
if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
    CHECK_C_COMPILER_FLAG("-mcpu=power9" HAS_POWER9)
@@ -243,272 +238,293 @@ find_package(Threads REQUIRED)
# Main library source code
set(SOURCES
    ${ROCKSDB_SOURCE_DIR}/cache/cache.cc
+   ${ROCKSDB_SOURCE_DIR}/cache/cache_entry_roles.cc
    ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc
    ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc
    ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc
    ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc
+   ${ROCKSDB_SOURCE_DIR}/db/blob/blob_fetcher.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc
+   ${ROCKSDB_SOURCE_DIR}/db/blob/blob_garbage_meter.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc
    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc
    ${ROCKSDB_SOURCE_DIR}/db/builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/c.cc
    ${ROCKSDB_SOURCE_DIR}/db/column_family.cc
-   "${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc"
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc
    ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc
    ${ROCKSDB_SOURCE_DIR}/db/convenience.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc
+   ${ROCKSDB_SOURCE_DIR}/db/db_impl/compacted_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc
    ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc
    ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc
    ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc
    ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc
    ${ROCKSDB_SOURCE_DIR}/db/experimental.cc
    ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc
    ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc
    ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc
    ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc
    ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc
    ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc
    ${ROCKSDB_SOURCE_DIR}/db/memtable.cc
    ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc
    ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc
    ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc
    ${ROCKSDB_SOURCE_DIR}/db/output_validator.cc
    ${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc
    ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc
    ${ROCKSDB_SOURCE_DIR}/db/repair.cc
    ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc
    ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc
    ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc
    ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc
    ${ROCKSDB_SOURCE_DIR}/db/version_set.cc
    ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc
    ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc
    ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc
+   ${ROCKSDB_SOURCE_DIR}/env/composite_env.cc
    ${ROCKSDB_SOURCE_DIR}/env/env.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc
    ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc
    ${ROCKSDB_SOURCE_DIR}/env/file_system.cc
    ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc
+   ${ROCKSDB_SOURCE_DIR}/env/fs_remap.cc
    ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc
    ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc
    ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc
    ${ROCKSDB_SOURCE_DIR}/file/file_util.cc
    ${ROCKSDB_SOURCE_DIR}/file/filename.cc
+   ${ROCKSDB_SOURCE_DIR}/file/line_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc
    ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc
    ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc
    ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc
    ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc
    ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc
    ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc
    ${ROCKSDB_SOURCE_DIR}/memory/arena.cc
    ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc
    ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc
    ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc
    ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc
    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc
    ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc
    ${ROCKSDB_SOURCE_DIR}/options/configurable.cc
    ${ROCKSDB_SOURCE_DIR}/options/customizable.cc
    ${ROCKSDB_SOURCE_DIR}/options/db_options.cc
    ${ROCKSDB_SOURCE_DIR}/options/options.cc
    ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc
    ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc
    ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc
    ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/format.cc
    ${ROCKSDB_SOURCE_DIR}/table/get_context.cc
    ${ROCKSDB_SOURCE_DIR}/table/iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc
    ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc
    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc
    ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc
    ${ROCKSDB_SOURCE_DIR}/table/table_factory.cc
    ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc
    ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc
    ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc
    ${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc
    ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc
    ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc
    ${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc
    ${ROCKSDB_SOURCE_DIR}/util/coding.cc
    ${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc
    ${ROCKSDB_SOURCE_DIR}/util/comparator.cc
    ${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc
    ${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc
    ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
    ${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc
    ${ROCKSDB_SOURCE_DIR}/util/hash.cc
    ${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc
    ${ROCKSDB_SOURCE_DIR}/util/random.cc
    ${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc
+   ${ROCKSDB_SOURCE_DIR}/util/ribbon_config.cc
    ${ROCKSDB_SOURCE_DIR}/util/slice.cc
    ${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc
    ${ROCKSDB_SOURCE_DIR}/util/status.cc
    ${ROCKSDB_SOURCE_DIR}/util/string_util.cc
    ${ROCKSDB_SOURCE_DIR}/util/thread_local.cc
    ${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc
    ${ROCKSDB_SOURCE_DIR}/util/xxhash.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/debug.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_manager.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_tracker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc
    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/concurrent_tree.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/keyrange.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/lock_request.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/locktree.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/manager.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/range_buffer.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/treenode.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/txnid_set.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/wfg.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/standalone_port.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/dbt.cc
+   ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/memarena.cc
-   $<TARGET_OBJECTS:rocksdb_build_version>)
+   rocksdb_build_version.cc)

if(HAVE_SSE42 AND NOT MSVC)
    set_source_files_properties(


@@ -1,3 +1,62 @@
-const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
-const char* rocksdb_build_git_date = "rocksdb_build_git_date:2000-01-01";
-const char* rocksdb_build_compile_date = "2000-01-01";
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
/// This file was edited for ClickHouse.
#include <memory>
#include "rocksdb/version.h"
#include "util/string_util.h"
// The build script may replace these values with real values based
// on whether or not GIT is available and the platform settings
static const std::string rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:master";
static const std::string rocksdb_build_date = "rocksdb_build_date:2000-01-01";
namespace ROCKSDB_NAMESPACE {
static void AddProperty(std::unordered_map<std::string, std::string> *props, const std::string& name) {
size_t colon = name.find(":");
if (colon != std::string::npos && colon > 0 && colon < name.length() - 1) {
// If we found a "@:", then this property was a build-time substitution that failed. Skip it
size_t at = name.find("@", colon);
if (at != colon + 1) {
// Everything before the colon is the name, after is the value
(*props)[name.substr(0, colon)] = name.substr(colon + 1);
}
}
}
static std::unordered_map<std::string, std::string>* LoadPropertiesSet() {
auto * properties = new std::unordered_map<std::string, std::string>();
AddProperty(properties, rocksdb_build_git_sha);
AddProperty(properties, rocksdb_build_git_tag);
AddProperty(properties, rocksdb_build_date);
return properties;
}
const std::unordered_map<std::string, std::string>& GetRocksBuildProperties() {
static std::unique_ptr<std::unordered_map<std::string, std::string>> props(LoadPropertiesSet());
return *props;
}
std::string GetRocksVersionAsString(bool with_patch) {
std::string version = ToString(ROCKSDB_MAJOR) + "." + ToString(ROCKSDB_MINOR);
if (with_patch) {
return version + "." + ToString(ROCKSDB_PATCH);
} else {
return version;
}
}
std::string GetRocksBuildInfoAsString(const std::string& program, bool verbose) {
std::string info = program + " (RocksDB) " + GetRocksVersionAsString(true);
if (verbose) {
for (const auto& it : GetRocksBuildProperties()) {
info.append("\n ");
info.append(it.first);
info.append(": ");
info.append(it.second);
}
}
return info;
}
} // namespace ROCKSDB_NAMESPACE
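AddProperty() above expects "name:value" strings and skips a value that still looks like a failed @...@ build-time substitution; this file simply ships the hardcoded placeholder values. A hypothetical sketch of how a build script could fill in a real SHA instead (the template path and the @GIT_SHA@ placeholder are assumptions, not ClickHouse's actual logic):

    # Ask git for the checked-out revision and bake it into the version file.
    execute_process(COMMAND git rev-parse HEAD
                    WORKING_DIRECTORY "${ROCKSDB_SOURCE_DIR}"
                    OUTPUT_VARIABLE GIT_SHA
                    OUTPUT_STRIP_TRAILING_WHITESPACE)
    # Assumed template containing:  ... = "rocksdb_build_git_sha:@GIT_SHA@";
    configure_file("${ROCKSDB_SOURCE_DIR}/util/build_version.cc.in" rocksdb_build_version.cc @ONLY)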

contrib/s2geometry vendored Submodule

@@ -0,0 +1 @@
Subproject commit 20ea540d81f4575a3fc0aea585aac611bcd03ede


@@ -0,0 +1,128 @@
set(S2_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/s2geometry/src")
set(S2_SRCS
"${S2_SOURCE_DIR}/s2/base/stringprintf.cc"
"${S2_SOURCE_DIR}/s2/base/strtoint.cc"
"${S2_SOURCE_DIR}/s2/encoded_s2cell_id_vector.cc"
"${S2_SOURCE_DIR}/s2/encoded_s2point_vector.cc"
"${S2_SOURCE_DIR}/s2/encoded_s2shape_index.cc"
"${S2_SOURCE_DIR}/s2/encoded_string_vector.cc"
"${S2_SOURCE_DIR}/s2/id_set_lexicon.cc"
"${S2_SOURCE_DIR}/s2/mutable_s2shape_index.cc"
"${S2_SOURCE_DIR}/s2/r2rect.cc"
"${S2_SOURCE_DIR}/s2/s1angle.cc"
"${S2_SOURCE_DIR}/s2/s1chord_angle.cc"
"${S2_SOURCE_DIR}/s2/s1interval.cc"
"${S2_SOURCE_DIR}/s2/s2boolean_operation.cc"
"${S2_SOURCE_DIR}/s2/s2builder.cc"
"${S2_SOURCE_DIR}/s2/s2builder_graph.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_closed_set_normalizer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_find_polygon_degeneracies.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_lax_polygon_layer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_s2point_vector_layer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_s2polygon_layer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_s2polyline_layer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_s2polyline_vector_layer.cc"
"${S2_SOURCE_DIR}/s2/s2builderutil_snap_functions.cc"
"${S2_SOURCE_DIR}/s2/s2cap.cc"
"${S2_SOURCE_DIR}/s2/s2cell.cc"
"${S2_SOURCE_DIR}/s2/s2cell_id.cc"
"${S2_SOURCE_DIR}/s2/s2cell_index.cc"
"${S2_SOURCE_DIR}/s2/s2cell_union.cc"
"${S2_SOURCE_DIR}/s2/s2centroids.cc"
"${S2_SOURCE_DIR}/s2/s2closest_cell_query.cc"
"${S2_SOURCE_DIR}/s2/s2closest_edge_query.cc"
"${S2_SOURCE_DIR}/s2/s2closest_point_query.cc"
"${S2_SOURCE_DIR}/s2/s2contains_vertex_query.cc"
"${S2_SOURCE_DIR}/s2/s2convex_hull_query.cc"
"${S2_SOURCE_DIR}/s2/s2coords.cc"
"${S2_SOURCE_DIR}/s2/s2crossing_edge_query.cc"
"${S2_SOURCE_DIR}/s2/s2debug.cc"
"${S2_SOURCE_DIR}/s2/s2earth.cc"
"${S2_SOURCE_DIR}/s2/s2edge_clipping.cc"
"${S2_SOURCE_DIR}/s2/s2edge_crosser.cc"
"${S2_SOURCE_DIR}/s2/s2edge_crossings.cc"
"${S2_SOURCE_DIR}/s2/s2edge_distances.cc"
"${S2_SOURCE_DIR}/s2/s2edge_tessellator.cc"
"${S2_SOURCE_DIR}/s2/s2error.cc"
"${S2_SOURCE_DIR}/s2/s2furthest_edge_query.cc"
"${S2_SOURCE_DIR}/s2/s2latlng.cc"
"${S2_SOURCE_DIR}/s2/s2latlng_rect.cc"
"${S2_SOURCE_DIR}/s2/s2latlng_rect_bounder.cc"
"${S2_SOURCE_DIR}/s2/s2lax_loop_shape.cc"
"${S2_SOURCE_DIR}/s2/s2lax_polygon_shape.cc"
"${S2_SOURCE_DIR}/s2/s2lax_polyline_shape.cc"
"${S2_SOURCE_DIR}/s2/s2loop.cc"
"${S2_SOURCE_DIR}/s2/s2loop_measures.cc"
"${S2_SOURCE_DIR}/s2/s2measures.cc"
"${S2_SOURCE_DIR}/s2/s2metrics.cc"
"${S2_SOURCE_DIR}/s2/s2max_distance_targets.cc"
"${S2_SOURCE_DIR}/s2/s2min_distance_targets.cc"
"${S2_SOURCE_DIR}/s2/s2padded_cell.cc"
"${S2_SOURCE_DIR}/s2/s2point_compression.cc"
"${S2_SOURCE_DIR}/s2/s2point_region.cc"
"${S2_SOURCE_DIR}/s2/s2pointutil.cc"
"${S2_SOURCE_DIR}/s2/s2polygon.cc"
"${S2_SOURCE_DIR}/s2/s2polyline.cc"
"${S2_SOURCE_DIR}/s2/s2polyline_alignment.cc"
"${S2_SOURCE_DIR}/s2/s2polyline_measures.cc"
"${S2_SOURCE_DIR}/s2/s2polyline_simplifier.cc"
"${S2_SOURCE_DIR}/s2/s2predicates.cc"
"${S2_SOURCE_DIR}/s2/s2projections.cc"
"${S2_SOURCE_DIR}/s2/s2r2rect.cc"
"${S2_SOURCE_DIR}/s2/s2region.cc"
"${S2_SOURCE_DIR}/s2/s2region_term_indexer.cc"
"${S2_SOURCE_DIR}/s2/s2region_coverer.cc"
"${S2_SOURCE_DIR}/s2/s2region_intersection.cc"
"${S2_SOURCE_DIR}/s2/s2region_union.cc"
"${S2_SOURCE_DIR}/s2/s2shape_index.cc"
"${S2_SOURCE_DIR}/s2/s2shape_index_buffered_region.cc"
"${S2_SOURCE_DIR}/s2/s2shape_index_measures.cc"
"${S2_SOURCE_DIR}/s2/s2shape_measures.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_build_polygon_boundaries.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_coding.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_contains_brute_force.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_edge_iterator.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_get_reference_point.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_range_iterator.cc"
"${S2_SOURCE_DIR}/s2/s2shapeutil_visit_crossing_edge_pairs.cc"
"${S2_SOURCE_DIR}/s2/s2text_format.cc"
"${S2_SOURCE_DIR}/s2/s2wedge_relations.cc"
"${S2_SOURCE_DIR}/s2/strings/ostringstream.cc"
"${S2_SOURCE_DIR}/s2/strings/serialize.cc"
# ClickHouse doesn't use strings from abseil,
# so there are no duplicate symbols.
"${S2_SOURCE_DIR}/s2/third_party/absl/base/dynamic_annotations.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/base/internal/raw_logging.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/base/internal/throw_delegate.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/numeric/int128.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/ascii.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/match.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/numbers.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/str_cat.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/str_split.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/string_view.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/strip.cc"
"${S2_SOURCE_DIR}/s2/third_party/absl/strings/internal/memutil.cc"
"${S2_SOURCE_DIR}/s2/util/bits/bit-interleave.cc"
"${S2_SOURCE_DIR}/s2/util/bits/bits.cc"
"${S2_SOURCE_DIR}/s2/util/coding/coder.cc"
"${S2_SOURCE_DIR}/s2/util/coding/varint.cc"
"${S2_SOURCE_DIR}/s2/util/math/exactfloat/exactfloat.cc"
"${S2_SOURCE_DIR}/s2/util/math/mathutil.cc"
"${S2_SOURCE_DIR}/s2/util/units/length-units.cc"
)
add_library(s2 ${S2_SRCS})
set_property(TARGET s2 PROPERTY CXX_STANDARD 11)
if (OPENSSL_FOUND)
target_link_libraries(s2 PRIVATE ${OPENSSL_LIBRARIES})
endif()
target_include_directories(s2 SYSTEM BEFORE PUBLIC "${S2_SOURCE_DIR}/")
if(M_LIBRARY)
target_link_libraries(s2 PRIVATE ${M_LIBRARY})
endif()


@@ -4,3 +4,6 @@ set(SIMDJSON_SRC "${SIMDJSON_SRC_DIR}/simdjson.cpp")
add_library(simdjson ${SIMDJSON_SRC})
target_include_directories(simdjson SYSTEM PUBLIC "${SIMDJSON_INCLUDE_DIR}" PRIVATE "${SIMDJSON_SRC_DIR}")
+
+# simdjson uses its own CPU dispatching and gets confused if we enable AVX/AVX2 flags.
+target_compile_options(simdjson PRIVATE -mno-avx -mno-avx2)

contrib/sqlite-amalgamation vendored Submodule

@@ -0,0 +1 @@
Subproject commit 9818baa5d027ffb26d57f810dc4c597d4946781c


@@ -0,0 +1,6 @@
set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation")
set(SRCS ${LIBRARY_DIR}/sqlite3.c)
add_library(sqlite ${SRCS})
target_include_directories(sqlite SYSTEM PUBLIC "${LIBRARY_DIR}")

contrib/wordnet-blast vendored Submodule

@@ -0,0 +1 @@
Subproject commit 1d16ac28036e19fe8da7ba72c16a307fbdf8c87e


@@ -0,0 +1,13 @@
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/wordnet-blast")
set(SRCS
"${LIBRARY_DIR}/wnb/core/info_helper.cc"
"${LIBRARY_DIR}/wnb/core/load_wordnet.cc"
"${LIBRARY_DIR}/wnb/core/wordnet.cc"
)
add_library(wnb ${SRCS})
target_link_libraries(wnb PRIVATE boost::headers_only boost::graph)
target_include_directories(wnb PUBLIC "${LIBRARY_DIR}")

contrib/zlib-ng vendored

@@ -1 +1 @@
-Subproject commit db232d30b4c72fd58e6d7eae2d12cebf9c3d90db
+Subproject commit 6a5e93b9007782115f7f7e5235dedc81c4f1facb

debian/changelog vendored

@@ -1,5 +1,5 @@
-clickhouse (21.8.1.1) unstable; urgency=low
+clickhouse (21.9.1.1) unstable; urgency=low

  * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru>  Mon, 28 Jun 2021 00:50:15 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru>  Sat, 10 Jul 2021 08:22:49 +0300


@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.8.1.*
+ARG version=21.9.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \


@@ -27,7 +27,7 @@ RUN apt-get update \
# Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
# to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
# Significantly increases deb packaging speed and is compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
    && chmod +x dpkg-deb \
    && cp dpkg-deb /usr/bin


@@ -151,8 +151,14 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ
        cmake_flags.append('-DENABLE_TESTS=1')
        cmake_flags.append('-DUSE_GTEST=1')

+   # "Unbundled" build is not suitable for any production usage.
+   # But it is occasionally used by some developers.
+   # The whole idea of using an unknown version of libraries from the OS distribution is deeply flawed.
+   # We wish these developers good luck.
    if unbundled:
-       cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1 -DENABLE_ARROW=0 -DENABLE_AVRO=0 -DENABLE_ORC=0 -DENABLE_PARQUET=0')
+       # We also disable all CPU features except basic x86_64.
+       # It is only slightly related to the "unbundled" build, but it is a good place to test if code compiles without these instruction sets.
+       cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1 -DENABLE_ARROW=0 -DENABLE_AVRO=0 -DENABLE_ORC=0 -DENABLE_PARQUET=0 -DENABLE_SSSE3=0 -DENABLE_SSE41=0 -DENABLE_SSE42=0 -DENABLE_PCLMULQDQ=0 -DENABLE_POPCNT=0 -DENABLE_AVX=0 -DENABLE_AVX2=0')

    if split_binary:
        cmake_flags.append('-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1')


@@ -2,7 +2,7 @@
FROM yandex/clickhouse-deb-builder

RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.jfrog.io/artifactory/arrow/ubuntu/apache-arrow-apt-source-latest-${CODENAME}.deb" \
    && dpkg -i /tmp/arrow-keyring.deb

# Libraries from OS are only needed to test the "unbundled" build (that is not used in production).
@@ -23,6 +23,7 @@ RUN apt-get update \
    libboost-regex-dev \
    libboost-context-dev \
    libboost-coroutine-dev \
+   libboost-graph-dev \
    zlib1g-dev \
    liblz4-dev \
    libdouble-conversion-dev \

View File

@@ -1,7 +1,7 @@
 FROM ubuntu:20.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.8.1.*
+ARG version=21.9.1.*
 ARG gosu_ver=1.10
 
 # set non-empty deb_location_url url to create a docker image

View File

@@ -72,7 +72,10 @@ do
     if [ "$DO_CHOWN" = "1" ]; then
         # ensure proper directories permissions
-        chown -R "$USER:$GROUP" "$dir"
+        # but skip it if the directory already has proper permissions, because recursive chown may be slow
+        if [ "$(stat -c %u "$dir")" != "$USER" ] || [ "$(stat -c %g "$dir")" != "$GROUP" ]; then
+            chown -R "$USER:$GROUP" "$dir"
+        fi
     elif ! $gosu test -d "$dir" -a -w "$dir" -a -r "$dir"; then
         echo "Necessary directory '$dir' isn't accessible by user with id '$USER'"
         exit 1
@@ -161,6 +164,10 @@ fi
 # if no args passed to `docker run` or first argument start with `--`, then the user is passing clickhouse-server arguments
 if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
+    # Watchdog is launched by default, but does not send SIGINT to the main process,
+    # so the container can't be stopped by Ctrl+C
+    CLICKHOUSE_WATCHDOG_ENABLE=${CLICKHOUSE_WATCHDOG_ENABLE:-0}
+    export CLICKHOUSE_WATCHDOG_ENABLE
     exec $gosu /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" "$@"
 fi
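A quick sketch of how this new default interacts with `docker run` (the image name is illustrative; any image built from this Dockerfile applies):

```bash
# Watchdog is now off by default, so Ctrl+C stops the container as expected.
docker run -it yandex/clickhouse-server

# Hypothetical opt-in: restore the watchdog by setting the variable explicitly.
docker run -e CLICKHOUSE_WATCHDOG_ENABLE=1 yandex/clickhouse-server
```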

View File

@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.8.1.*
+ARG version=21.9.1.*
 
 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \

View File

@@ -27,7 +27,7 @@ RUN apt-get update \
 # Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
 # to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
 # Significantly increase deb packaging speed and compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
     && chmod +x dpkg-deb \
     && cp dpkg-deb /usr/bin
@@ -61,4 +61,7 @@ ENV TSAN_OPTIONS='halt_on_error=1 history_size=7'
 ENV UBSAN_OPTIONS='print_stacktrace=1'
 ENV MSAN_OPTIONS='abort_on_error=1 poison_in_dtor=1'
 
+ENV TZ=Europe/Moscow
+RUN ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 CMD sleep 1

View File

@@ -27,7 +27,7 @@ RUN apt-get update \
 # Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
 # to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
 # Significantly increase deb packaging speed and compatible with old systems
-RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
+RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
     && chmod +x dpkg-deb \
     && cp dpkg-deb /usr/bin
@@ -65,7 +65,7 @@ RUN apt-get update \
     unixodbc \
     --yes --no-install-recommends
 
-RUN pip3 install numpy scipy pandas
+RUN pip3 install numpy scipy pandas Jinja2
 
 # This symlink required by gcc to find lld compiler
 RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld

View File

@@ -279,6 +279,7 @@ function run_tests
         00926_multimatch
         00929_multi_match_edit_distance
         01681_hyperscan_debug_assertion
+        02004_max_hyperscan_regex_length
 
         01176_mysql_client_interactive # requires mysql client
         01031_mutations_interpreter_and_context
@@ -299,6 +300,7 @@ function run_tests
         01318_decrypt # Depends on OpenSSL
         01663_aes_msan # Depends on OpenSSL
         01667_aes_args_check # Depends on OpenSSL
+        01683_codec_encrypted # Depends on OpenSSL
         01776_decrypt_aead_size_check # Depends on OpenSSL
         01811_filter_by_null # Depends on OpenSSL
 
         01281_unsucceeded_insert_select_queries_counter
@@ -310,6 +312,8 @@ function run_tests
         01411_bayesian_ab_testing
         01798_uniq_theta_sketch
         01799_long_uniq_theta_sketch
+        01890_stem # depends on libstemmer_c
+        02003_compress_bz2 # depends on bzip2
 
         collate
         collation
         _orc_
@@ -378,6 +382,16 @@ function run_tests
         # needs pv
         01923_network_receive_time_metric_insert
 
+        01889_sqlite_read_write
+
+        # needs s2
+        01849_geoToS2
+        01851_s2_to_geo
+        01852_s2_get_neighbours
+        01853_s2_cells_intersect
+        01854_s2_cap_contains
+        01854_s2_cap_union
     )
 
     time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \

View File

@@ -194,6 +194,10 @@ continue
         jobs
         pstree -aspgT
 
+        server_exit_code=0
+        wait $server_pid || server_exit_code=$?
+        echo "Server exit code is $server_exit_code"
+
         # Make files with status and description we'll show for this check on Github.
         task_exit_code=$fuzzer_exit_code
         if [ "$server_died" == 1 ]
@@ -222,7 +226,7 @@ continue
         task_exit_code=$fuzzer_exit_code
         echo "failure" > status.txt
         { grep --text -o "Found error:.*" fuzzer.log \
-            || grep --text -o "Exception.*" fuzzer.log \
+            || grep --text -ao "Exception:.*" fuzzer.log \
             || echo "Fuzzer failed ($fuzzer_exit_code). See the logs." ; } \
             | tail -1 > description.txt
     fi

View File

@@ -32,7 +32,7 @@ RUN rm -rf \
 RUN apt-get clean
 
 # Install MySQL ODBC driver
-RUN curl 'https://cdn.mysql.com//Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
+RUN curl 'https://downloads.mysql.com/archives/get/p/10/file/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --location --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
 
 # Unfortunately this is required for a single test for conversion data from zookeeper to clickhouse-keeper.
 # ZooKeeper is not started by default, but consumes some space in containers.
@@ -49,4 +49,3 @@ RUN mkdir /zookeeper && chmod -R 777 /zookeeper
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

View File

@@ -76,6 +76,7 @@ RUN python3 -m pip install \
     pytest \
     pytest-timeout \
     pytest-xdist \
+    pytest-repeat \
    redis \
    tzlocal \
    urllib3 \

View File

@@ -14,10 +14,14 @@ services:
             }
             EOF
             ./docker-entrypoint.sh'
-        ports:
-            - 9020:9019
+        expose:
+            - 9019
         healthcheck:
             test: ["CMD", "curl", "-s", "localhost:9019/ping"]
             interval: 5s
             timeout: 3s
             retries: 30
+        volumes:
+            - type: ${JDBC_BRIDGE_FS:-tmpfs}
+              source: ${JDBC_BRIDGE_LOGS:-}
+              target: /app/logs
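With both variables unset, the bridge logs land on an ephemeral tmpfs mount. A possible way to persist them instead (variable values and the host path are illustrative, not part of this change):

```bash
# Hypothetical: bind-mount a host directory for the JDBC bridge logs.
export JDBC_BRIDGE_FS=bind
export JDBC_BRIDGE_LOGS=/tmp/jdbc-bridge-logs
mkdir -p "$JDBC_BRIDGE_LOGS"
docker-compose up -d
```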

View File

@@ -0,0 +1,13 @@
+version: '2.3'
+services:
+    mongo1:
+        image: mongo:3.6
+        restart: always
+        environment:
+            MONGO_INITDB_ROOT_USERNAME: root
+            MONGO_INITDB_ROOT_PASSWORD: clickhouse
+        volumes:
+            - ${MONGO_CONFIG_PATH}:/mongo/
+        ports:
+            - ${MONGO_EXTERNAL_PORT}:${MONGO_INTERNAL_PORT}
+        command: --config /mongo/mongo_secure.conf --profile=2 --verbose
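The compose file only mounts `${MONGO_CONFIG_PATH}` and points mongod at `/mongo/mongo_secure.conf`; the config itself lives outside this change. A hedged sketch of what such a file might contain (MongoDB 3.6 option names; the certificate path is illustrative):

```bash
# Hypothetical mongo_secure.conf contents; the PEM path must match the mounted volume.
cat > mongo_secure.conf <<'EOF'
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /mongo/cert.pem
    allowConnectionsWithoutCertificates: true
EOF
```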

View File

@@ -2,7 +2,7 @@ version: '2.3'
 services:
     postgres1:
         image: postgres
-        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
+        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all", "-c", "max_connections=200"]
         restart: always
         expose:
             - ${POSTGRES_PORT}

View File

@@ -2,7 +2,7 @@ version: '2.3'
 services:
     rabbitmq1:
-        image: rabbitmq:3-management-alpine
+        image: rabbitmq:3.8-management-alpine
         hostname: rabbitmq1
         expose:
             - ${RABBITMQ_PORT}

View File

@@ -628,6 +628,9 @@ cat analyze/errors.log >> report/errors.log ||:
 cat profile-errors.log >> report/errors.log ||:
 
 clickhouse-local --query "
+-- We use decimals specifically to get fixed-point, fixed-width formatting.
+set output_format_decimal_trailing_zeros = 1;
+
 create view query_display_names as select * from
     file('analyze/query-display-names.tsv', TSV,
         'test text, query_index int, query_display_name text')
@@ -975,6 +978,9 @@ for version in {right,left}
 do
     rm -rf data
     clickhouse-local --query "
+-- We use decimals specifically to get fixed-point, fixed-width formatting.
+set output_format_decimal_trailing_zeros = 1;
+
 create view query_profiles as
     with 0 as left, 1 as right
     select * from file('analyze/query-profiles.tsv', TSV,
@@ -1170,6 +1176,9 @@ rm -rf metrics ||:
 mkdir metrics
 
 clickhouse-local --query "
+-- We use decimals specifically to get fixed-point, fixed-width formatting.
+set output_format_decimal_trailing_zeros = 1;
+
 create view right_async_metric_log as
     select * from file('right-async-metric-log.tsv', TSVWithNamesAndTypes,
         '$(cat right-async-metric-log.tsv.columns)')
@@ -1178,11 +1187,11 @@ create view right_async_metric_log as
 
 -- Use the right log as time reference because it may have higher precision.
 create table metrics engine File(TSV, 'metrics/metrics.tsv') as
     with (select min(event_time) from right_async_metric_log) as min_time
-    select name metric, r.event_time - min_time event_time, l.value as left, r.value as right
+    select metric, r.event_time - min_time event_time, l.value as left, r.value as right
     from right_async_metric_log r
     asof join file('left-async-metric-log.tsv', TSVWithNamesAndTypes,
         '$(cat left-async-metric-log.tsv.columns)') l
-    on l.name = r.name and r.event_time <= l.event_time
+    on l.metric = r.metric and r.event_time <= l.event_time
     order by metric, event_time
     ;
@@ -1196,7 +1205,7 @@ create table changes engine File(TSV, 'metrics/changes.tsv') as
         if(left > right, left / right, right / left) times_diff
     from metrics
     group by metric
-    having abs(diff) > 0.05 and isFinite(diff)
+    having abs(diff) > 0.05 and isFinite(diff) and isFinite(times_diff)
     )
 order by diff desc
     ;
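The effect of the new setting can be reproduced standalone; a small sketch (assuming a `clickhouse-local` of a comparable version):

```bash
# Prints 1.500 rather than 1.5: fixed-point, fixed-width output for the report tables.
clickhouse-local --query "
    set output_format_decimal_trailing_zeros = 1;
    select toDecimal32(1.5, 3)"
```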

View File

@@ -23,6 +23,7 @@
             <!-- disable jit for perf tests -->
             <compile_expressions>0</compile_expressions>
+            <compile_aggregate_expressions>0</compile_aggregate_expressions>
         </default>
     </profiles>
 
     <users>

View File

@@ -183,6 +183,10 @@ for conn_index, c in enumerate(all_connections):
         # requires clickhouse-driver >= 1.1.5 to accept arbitrary new settings
         # (https://github.com/mymarilyn/clickhouse-driver/pull/142)
         c.settings[s.tag] = s.text
+        # We have to perform a query to make sure the settings work. Otherwise an
+        # unknown setting will lead to failing precondition check, and we will skip
+        # the test, which is wrong.
+        c.execute("select 1")
 
 reportStageEnd('settings')
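The same probe idea works from the shell: an unknown setting fails on the first query instead of silently skipping the test. A sketch (the second setting name is deliberately bogus):

```bash
# Valid setting: probe succeeds and prints 1.
clickhouse-client --max_rows_in_join=1000 --query "select 1"

# Hypothetical bogus setting: the probe fails immediately.
clickhouse-client --no_such_setting=1 --query "select 1" || echo "setting rejected"
```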

View File

@@ -28,7 +28,7 @@ RUN apt-get update --yes \
 ENV PKG_VERSION="pvs-studio-latest"
 
 RUN set -x \
-    && export PUBKEY_HASHSUM="486a0694c7f92e96190bbfac01c3b5ac2cb7823981db510a28f744c99eabbbf17a7bcee53ca42dc6d84d4323c2742761" \
+    && export PUBKEY_HASHSUM="686e5eb8b3c543a5c54442c39ec876b6c2d912fe8a729099e600017ae53c877dda3368fe38ed7a66024fe26df6b5892a" \
    && wget -nv https://files.viva64.com/etc/pubkey.txt -O /tmp/pubkey.txt \
    && echo "${PUBKEY_HASHSUM} /tmp/pubkey.txt" | sha384sum -c \
    && apt-key add /tmp/pubkey.txt \

View File

@@ -2,6 +2,11 @@
 set -e -x
 
+# Choose a random timezone for this test run
+TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
+echo "Chosen random timezone: $TZ"
+ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 dpkg -i package_folder/clickhouse-common-static_*.deb;
 dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
 dpkg -i package_folder/clickhouse-server_*.deb

View File

@@ -29,9 +29,10 @@ RUN apt-get update -y \
     unixodbc \
     wget \
     mysql-client=5.7* \
-    postgresql-client
+    postgresql-client \
+    sqlite3
 
-RUN pip3 install numpy scipy pandas
+RUN pip3 install numpy scipy pandas Jinja2
 
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
     && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \

View File

@@ -12,7 +12,7 @@ UNKNOWN_SIGN = "[ UNKNOWN "
 SKIPPED_SIGN = "[ SKIPPED "
 HUNG_SIGN = "Found hung queries in processlist"
 
-NO_TASK_TIMEOUT_SIGN = "All tests have finished"
+NO_TASK_TIMEOUT_SIGNS = ["All tests have finished", "No tests were run"]
 
 RETRIES_SIGN = "Some tests were restarted"
@@ -29,7 +29,7 @@ def process_test_log(log_path):
     with open(log_path, 'r') as test_file:
         for line in test_file:
             line = line.strip()
-            if NO_TASK_TIMEOUT_SIGN in line:
+            if any(s in line for s in NO_TASK_TIMEOUT_SIGNS):
                 task_timeout = False
             if HUNG_SIGN in line:
                 hung = True
@@ -80,6 +80,7 @@ def process_result(result_path):
     if result_path and os.path.exists(result_path):
         total, skipped, unknown, failed, success, hung, task_timeout, retries, test_results = process_test_log(result_path)
         is_flacky_check = 1 < int(os.environ.get('NUM_TRIES', 1))
+        logging.info("Is flacky check: %s", is_flacky_check)
         # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately)
         # But it's Ok for "flaky checks" - they can contain just one test for check which is marked as skipped.
         if failed != 0 or unknown != 0 or (success == 0 and (not is_flacky_check)):

View File

@@ -3,6 +3,11 @@
 # fail on errors, verbose and export all env variables
 set -e -x -a
 
+# Choose a random timezone for this test run.
+TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
+echo "Chosen random timezone: $TZ"
+ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
+
 dpkg -i package_folder/clickhouse-common-static_*.deb
 dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
 dpkg -i package_folder/clickhouse-server_*.deb
@@ -138,15 +143,18 @@ if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
 fi
 tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||:
 tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
+tar -chf /test_output/zookeeper_log_dump.tar /var/lib/clickhouse/data/system/zookeeper_log ||:
 tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:
 
 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
     grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
     grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
     pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
     pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
     mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
     mv /var/log/clickhouse-server/stderr2.log /test_output/ ||:
+    tar -chf /test_output/zookeeper_log_dump1.tar /var/lib/clickhouse1/data/system/zookeeper_log ||:
+    tar -chf /test_output/zookeeper_log_dump2.tar /var/lib/clickhouse2/data/system/zookeeper_log ||:
     tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
     tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||:
 fi

View File

@@ -77,9 +77,6 @@ RUN mkdir -p /tmp/clickhouse-odbc-tmp \
     && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
     && rm -rf /tmp/clickhouse-odbc-tmp
 
-ENV TZ=Europe/Moscow
-RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
-
 COPY run.sh /
 
 CMD ["/bin/bash", "/run.sh"]

View File

@@ -58,11 +58,11 @@ function start()
         echo "Cannot start clickhouse-server"
         cat /var/log/clickhouse-server/stdout.log
         tail -n1000 /var/log/clickhouse-server/stderr.log
-        tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+        tail -n100000 /var/log/clickhouse-server/clickhouse-server.log | grep -F -v '<Warning> RaftInstance:' -e '<Information> RaftInstance' | tail -n1000
         break
     fi
     # use root to match with current uid
-    clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>/var/log/clickhouse-server/stderr.log
+    clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>>/var/log/clickhouse-server/stderr.log
     sleep 0.5
     counter=$((counter + 1))
 done
@@ -118,35 +118,35 @@ clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_
 [ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"
 
 # Print Fatal log messages to stdout
-zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log
+zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log*
 
 # Grep logs for sanitizer asserts, crashes and other critical errors
 
 # Sanitizer asserts
 zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
 zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
-zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" > /dev/null \
+zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" /test_output/tmp > /dev/null \
     && echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
     || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
 rm -f /test_output/tmp
 
 # OOM
-zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
 
 # Logical errors
-zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv
 
 # Crash
-zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv
 
 # It also checks for crash without stacktrace (printed by watchdog)
-zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

View File

@@ -20,6 +20,7 @@ def get_skip_list_cmd(path):
 
 def get_options(i):
     options = []
+    client_options = []
     if 0 < i:
         options.append("--order=random")
 
@@ -27,25 +28,29 @@ def get_options(i):
         options.append("--db-engine=Ordinary")
 
     if i % 3 == 2:
-        options.append('''--client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
+        options.append('''--db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
+        client_options.append('allow_experimental_database_replicated=1')
 
     # If database name is not specified, new database is created for each functional test.
     # Run some threads with one database for all tests.
     if i % 2 == 1:
         options.append(" --database=test_{}".format(i))
 
-    if i % 7 == 0:
-        options.append(" --client-option='join_use_nulls=1'")
+    if i % 5 == 1:
+        client_options.append("join_use_nulls=1")
 
-    if i % 14 == 0:
-        options.append(' --client-option="join_algorithm=\'partial_merge\'"')
+    if i % 15 == 6:
+        client_options.append("join_algorithm='partial_merge'")
 
-    if i % 21 == 0:
-        options.append(' --client-option="join_algorithm=\'auto\'"')
-        options.append(' --client-option="max_rows_in_join=1000"')
+    if i % 15 == 11:
+        client_options.append("join_algorithm='auto'")
+        client_options.append('max_rows_in_join=1000')
 
     if i == 13:
-        options.append(" --client-option='memory_tracker_fault_probability=0.00001'")
+        client_options.append('memory_tracker_fault_probability=0.001')
+
+    if client_options:
+        options.append(" --client-option " + ' '.join(client_options))
 
     return ' '.join(options)

View File

@@ -35,7 +35,7 @@ RUN apt-get update \
 ENV TZ=Europe/Moscow
 RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
 
-RUN pip3 install urllib3 testflows==1.6.90 docker-compose==1.29.1 docker==5.0.0 dicttoxml kazoo tzlocal python-dateutil numpy
+RUN pip3 install urllib3 testflows==1.7.20 docker-compose==1.29.1 docker==5.0.0 dicttoxml kazoo tzlocal python-dateutil numpy
 
 ENV DOCKER_CHANNEL stable
 ENV DOCKER_VERSION 20.10.6

View File

@@ -1,8 +1,6 @@
 # docker build -t yandex/clickhouse-unit-test .
 FROM yandex/clickhouse-stateless-test
 
-ENV TZ=Europe/Moscow
-RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
 RUN apt-get install gdb
 
 COPY run.sh /

View File

@@ -105,11 +105,11 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
 5) ensure everything is correct, if the test output is incorrect (due to some bug for example), adjust the reference file using text editor.
 
-#### How to create good test
-- test should be
+#### How to create a good test
+- A test should be
     - minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
-    - fast - should not take longer than few seconds (better subseconds)
+    - fast - should not take longer than a few seconds (better subseconds)
     - correct - fails when the feature is not working
     - deterministic
     - isolated / stateless
@@ -126,6 +126,16 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
     - use other SQL files in the `0_stateless` folder as an example
    - ensure the feature / feature combination you want to test is not yet covered with existing tests
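As an illustration of these guidelines, a minimal sketch of such a test (contents are illustrative; the file name follows the example used earlier in this section):

```bash
cat > tests/queries/0_stateless/01521_dummy_test.sql <<'EOF'
-- minimal: only the table under test; deterministic: ORDER BY fixes the output
DROP TABLE IF EXISTS dummy;
CREATE TABLE dummy (id UInt8) ENGINE = Memory;
INSERT INTO dummy VALUES (1), (2), (3);
SELECT id FROM dummy ORDER BY id;
DROP TABLE dummy;
EOF
```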
+#### Test naming rules
+
+It's important to name tests correctly, so one could turn off some test subsets in a `clickhouse-test` invocation (an example invocation follows the table).
+
+| Tester flag | What should be in the test name | When the flag should be added |
+|---|---|---|
+| `--[no-]zookeeper` | "zookeeper" or "replica" | Test uses tables from the ReplicatedMergeTree family |
+| `--[no-]shard` | "shard" or "distributed" or "global" | Test uses connections to 127.0.0.2 or similar |
+| `--[no-]long` | "long" or "deadlock" or "race" | Test runs longer than 60 seconds |
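Assuming the flags behave as the table above describes, an invocation like this sketch would skip replicated-table and long-running tests while keeping distributed ones:

```bash
# Hypothetical selection: exclude zookeeper-dependent and long tests, keep shard tests.
clickhouse-test --no-zookeeper --no-long --shard
```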
 #### Commit / push / create PR.
 
 1) commit & push your changes

View File

@@ -134,10 +134,10 @@ $ ./release
 
 ## Faster builds for development
 
-Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. To enable these tweaks, pass the following flags to `cmake`:
+Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. One common way to improve build time is to use the 'split' build configuration, which builds a separate binary for every tool and further splits the code into several shared libraries. To enable this tweak, pass the following flags to `cmake`:
 
 ```
-ated-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
+-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
 ```
 
 ## You Don't Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
 
@@ -155,6 +155,10 @@ Normally ClickHouse is statically linked into a single static `clickhouse` binar
 -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
 ```
 
-Note that in this configuration there is no single `clickhouse` binary, and you have to run `clickhouse-server`, `clickhouse-client` etc.
+Note that the split build has several drawbacks:
+* There is no single `clickhouse` binary, and you have to run `clickhouse-server`, `clickhouse-client`, etc.
+* There is a risk of segfault if you run any of the programs while rebuilding the project.
+* You cannot run the integration tests, since they only work with a single complete binary.
+* You can't easily copy the binaries elsewhere. Instead of moving a single binary you'll need to copy all binaries and libraries.
 
 [Original article](https://clickhouse.tech/docs/en/development/build/) <!--hide-->

View File

@@ -7,8 +7,8 @@ toc_title: Third-Party Libraries Used
 
 The list of third-party libraries can be obtained by the following query:
 
-```
+``` sql
-SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en'
+SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en';
 ```
 
 [Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)
@@ -79,6 +79,7 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li
 | re2 | BSD 3-clause | /contrib/re2/LICENSE |
 | replxx | BSD 3-clause | /contrib/replxx/LICENSE.md |
 | rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb |
+| s2geometry | Apache | /contrib/s2geometry/LICENSE |
 | sentry-native | MIT | /contrib/sentry-native/LICENSE |
 | simdjson | Apache | /contrib/simdjson/LICENSE |
 | snappy | Public Domain | /contrib/snappy/COPYING |
@@ -89,3 +90,15 @@ SELECT library_name, license_type, license_path FROM system.licenses ORDER BY li
 | xz | Public Domain | /contrib/xz/COPYING |
 | zlib-ng | zLib | /contrib/zlib-ng/LICENSE.md |
 | zstd | BSD | /contrib/zstd/LICENSE |
+
+## Guidelines for adding new third-party libraries and maintaining custom changes in them {#adding-third-party-libraries}
+
+1. All external third-party code should reside in the dedicated directories under the `contrib` directory of the ClickHouse repo. Prefer Git submodules, when available.
+2. Fork/mirror the official repo in [Clickhouse-extras](https://github.com/ClickHouse-Extras). Prefer official GitHub repos, when available.
+3. Branch from the branch you want to integrate, e.g., `master` -> `clickhouse/master`, or `release/vX.Y.Z` -> `clickhouse/release/vX.Y.Z`.
+4. All forks in [Clickhouse-extras](https://github.com/ClickHouse-Extras) can be automatically synchronized with upstreams. `clickhouse/...` branches will remain unaffected, since virtually nobody is going to use that naming pattern in their upstream repos.
+5. Add submodules under `contrib` of the ClickHouse repo that refer to the above forks/mirrors. Set the submodules to track the corresponding `clickhouse/...` branches (see the sketch after this list).
+6. Every time custom changes have to be made in the library code, a dedicated branch should be created, like `clickhouse/my-fix`. Then this branch should be merged into the branch that is tracked by the submodule, e.g., `clickhouse/master` or `clickhouse/release/vX.Y.Z`.
+7. No code should be pushed to any branch of the forks in [Clickhouse-extras](https://github.com/ClickHouse-Extras) whose name does not follow the `clickhouse/...` pattern.
+8. Always write the custom changes with the official repo in mind. Once the PR is merged from (a feature/fix branch in) your personal fork into the fork in [Clickhouse-extras](https://github.com/ClickHouse-Extras), and the submodule is bumped in the ClickHouse repo, consider opening another PR from (a feature/fix branch in) the fork in [Clickhouse-extras](https://github.com/ClickHouse-Extras) to the official repo of the library. This makes sure that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, and 3) the change will not remain a maintenance burden solely on ClickHouse developers.
+9. When a submodule needs to start using newer code from the original branch (e.g., `master`), and since the custom changes might already be merged into the branch it is tracking (e.g., `clickhouse/master`), which may therefore diverge from its original counterpart (i.e., `master`), a careful merge should be carried out first, i.e., `master` -> `clickhouse/master`, and only then can the submodule be bumped in ClickHouse.
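A condensed sketch of steps 1-5 for a hypothetical library `somelib` (all names and URLs are placeholders):

```bash
# Assumes a fork already exists at ClickHouse-Extras/somelib with a clickhouse/master branch.
git submodule add https://github.com/ClickHouse-Extras/somelib.git contrib/somelib
cd contrib/somelib
git checkout clickhouse/master    # track the custom branch, not upstream master
cd ../..
git add .gitmodules contrib/somelib
git commit -m "Add somelib submodule tracking clickhouse/master"
```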

View File

@@ -123,7 +123,7 @@ For installing CMake and Ninja on Mac OS X first install Homebrew and then insta
 
     /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
     brew install cmake ninja
 
-Next, check the version of CMake: `cmake --version`. If it is below 3.3, you should install a newer version from the website: https://cmake.org/download/.
+Next, check the version of CMake: `cmake --version`. If it is below 3.12, you should install a newer version from the website: https://cmake.org/download/.
 
 ## Optional External Libraries {#optional-external-libraries}
@@ -237,6 +237,8 @@ The description of ClickHouse architecture can be found here: https://clickhouse
 
 The Code Style Guide: https://clickhouse.tech/docs/en/development/style/
 
+Adding third-party libraries: https://clickhouse.tech/docs/en/development/contrib/#adding-third-party-libraries
+
 Writing tests: https://clickhouse.tech/docs/en/development/tests/
 
 List of tasks: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22

View File

@@ -628,7 +628,7 @@ If the class is not intended for polymorphic use, you do not need to make functi
 
 **18.** Encodings.
 
-Use UTF-8 everywhere. Use `std::string`and`char *`. Do not use `std::wstring`and`wchar_t`.
+Use UTF-8 everywhere. Use `std::string` and `char *`. Do not use `std::wstring` and `wchar_t`.
 
 **19.** Logging.
@@ -749,17 +749,9 @@ If your code in the `master` branch is not buildable yet, exclude it from the bu
 
 **1.** The C++20 standard library is used (experimental extensions are allowed), as well as `boost` and `Poco` frameworks.
 
-**2.** If necessary, you can use any well-known libraries available in the OS package.
-
-If there is a good solution already available, then use it, even if it means you have to install another library.
-
-(But be prepared to remove bad libraries from code.)
-
-**3.** You can install a library that isn't in the packages, if the packages do not have what you need or have an outdated version or the wrong type of compilation.
-
-**4.** If the library is small and does not have its own complex build system, put the source files in the `contrib` folder.
-
-**5.** Preference is always given to libraries that are already in use.
+**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in form of source code in `contrib` directory and built with ClickHouse. See [Guidelines for adding new third-party libraries](contrib.md#adding-third-party-libraries) for details.
+
+**3.** Preference is always given to libraries that are already in use.
 
 ## General Recommendations {#general-recommendations-1}

Some files were not shown because too many files have changed in this diff.