mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-22 23:52:03 +00:00

commit 6abd55880e
Merge branch 'master' into annadevyatova-DOCSUP-5911-limit
.gitmodules (vendored): 5 changes
```diff
@@ -133,7 +133,7 @@
 	url = https://github.com/unicode-org/icu.git
 [submodule "contrib/flatbuffers"]
 	path = contrib/flatbuffers
-	url = https://github.com/google/flatbuffers.git
+	url = https://github.com/ClickHouse-Extras/flatbuffers.git
 [submodule "contrib/libc-headers"]
 	path = contrib/libc-headers
 	url = https://github.com/ClickHouse-Extras/libc-headers.git
```
```diff
@@ -221,6 +221,9 @@
 [submodule "contrib/NuRaft"]
 	path = contrib/NuRaft
 	url = https://github.com/ClickHouse-Extras/NuRaft.git
+[submodule "contrib/nanodbc"]
+	path = contrib/nanodbc
+	url = https://github.com/ClickHouse-Extras/nanodbc.git
 [submodule "contrib/datasketches-cpp"]
 	path = contrib/datasketches-cpp
 	url = https://github.com/ClickHouse-Extras/datasketches-cpp.git
```
CHANGELOG.md: 155 changes
@@ -1,3 +1,156 @@

## ClickHouse release 21.4

### ClickHouse release 21.4.1 2021-04-12

#### Backward Incompatible Change

* The `toStartOfInterval` function will align hour intervals to midnight (in previous versions they were aligned to the start of the unix epoch). For example, `toStartOfInterval(x, INTERVAL 11 HOUR)` will split every day into three intervals: `00:00:00..10:59:59`, `11:00:00..21:59:59` and `22:00:00..23:59:59`. This behaviour is better suited for practical needs. This closes [#9510](https://github.com/ClickHouse/ClickHouse/issues/9510). [#22060](https://github.com/ClickHouse/ClickHouse/pull/22060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
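A quick illustration of the new alignment, derived from the intervals above (a sketch; actual output formatting depends on the server timezone):

```sql
-- 11-hour intervals now restart at midnight of each day rather than
-- being counted from the start of the unix epoch.
SELECT toStartOfInterval(toDateTime('2021-04-12 12:30:00'), INTERVAL 11 HOUR);
-- 12:30 falls into the day's second interval (11:00:00..21:59:59),
-- so the result is 2021-04-12 11:00:00.
```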
* `Age` and `Precision` in graphite rollup configs should increase from retention to retention. Now it's checked and the wrong config raises an exception. [#21496](https://github.com/ClickHouse/ClickHouse/pull/21496) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix `cutToFirstSignificantSubdomainCustom()`/`firstSignificantSubdomainCustom()` returning wrong result for 3+ level domains present in custom top-level domain list. For input domains matching these custom top-level domains, the third-level domain was considered to be the first significant one. This is now fixed. This change may introduce incompatibility if the function is used in e.g. the sharding key. [#21946](https://github.com/ClickHouse/ClickHouse/pull/21946) ([Azat Khuzhin](https://github.com/azat)).
* Column `keys` in table `system.dictionaries` was replaced with columns `key.names` and `key.types`. Columns `key.names`, `key.types`, `attribute.names`, `attribute.types` from the `system.dictionaries` table do not require the dictionary to be loaded. [#21884](https://github.com/ClickHouse/ClickHouse/pull/21884) ([Maksim Kita](https://github.com/kitaisreal)).
* Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**:
  * `ATTACH PART[ITION]` queries may not work during cluster upgrade.
  * It's not possible to roll back to an older ClickHouse version after executing an `ALTER ... ATTACH` query in the new version, as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
* In this version, an empty `<remote_url_allow_hosts></remote_url_allow_hosts>` element will block all access to remote hosts, while in previous versions it did nothing. If you want to keep the old behaviour and you have an empty `remote_url_allow_hosts` element in the configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).

#### New Feature

* Extended range of `DateTime64` to support dates from year 1925 to 2283. Improved support of `DateTime` around zero date (`1970-01-01`). [#9404](https://github.com/ClickHouse/ClickHouse/pull/9404) ([alexey-milovidov](https://github.com/alexey-milovidov), [Vasily Nemkov](https://github.com/Enmk)). Not all time and date functions work for the extended range of dates yet.
* Added support of Kerberos authentication for preconfigured users and HTTP requests (GSS-SPNEGO). [#14995](https://github.com/ClickHouse/ClickHouse/pull/14995) ([Denis Glazachev](https://github.com/traceon)).
* Add `prefer_column_name_to_alias` setting to use original column names instead of aliases. It is needed to be more compatible with common databases' aliasing rules. This is for [#9715](https://github.com/ClickHouse/ClickHouse/issues/9715) and [#9887](https://github.com/ClickHouse/ClickHouse/issues/9887). [#22044](https://github.com/ClickHouse/ClickHouse/pull/22044) ([Amos Bird](https://github.com/amosbird)).
* Added functions `dictGetChildren(dictionary, key)` and `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
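For illustration, assuming a hierarchical dictionary named `regions` (a hypothetical name) where key 1 is the parent of keys 2 and 3, and key 2 is the parent of key 4:

```sql
SELECT dictGetChildren('regions', toUInt64(1));        -- [2, 3] (direct children)
SELECT dictGetDescendants('regions', toUInt64(1), 1);  -- [2, 3] (one level, same as dictGetChildren)
SELECT dictGetDescendants('regions', toUInt64(1), 0);  -- [2, 3, 4] (level 0 = all descendants)
```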
* Added `executable_pool` dictionary source. Closes [#14528](https://github.com/ClickHouse/ClickHouse/issues/14528). [#21321](https://github.com/ClickHouse/ClickHouse/pull/21321) ([Maksim Kita](https://github.com/kitaisreal)).
* Added table function `dictionary`. It works the same way as the `Dictionary` engine. Closes [#21560](https://github.com/ClickHouse/ClickHouse/issues/21560). [#21910](https://github.com/ClickHouse/ClickHouse/pull/21910) ([Maksim Kita](https://github.com/kitaisreal)).
* Support `Nullable` type for `PolygonDictionary` attribute. [#21890](https://github.com/ClickHouse/ClickHouse/pull/21890) ([Maksim Kita](https://github.com/kitaisreal)).
* Functions `dictGet`, `dictHas` use the current database name if it is not specified for dictionaries created with DDL. Closes [#21632](https://github.com/ClickHouse/ClickHouse/issues/21632). [#21859](https://github.com/ClickHouse/ClickHouse/pull/21859) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dictGetOrNull`. It works like `dictGet`, but returns `NULL` if the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
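A minimal sketch, assuming a dictionary `products` (hypothetical name) with a `name` attribute:

```sql
SELECT dictGetOrNull('products', 'name', toUInt64(42));
-- NULL if key 42 is absent, instead of the attribute's default value
-- that dictGet would return.
```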
* Added async update in `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for `Nullable` type in `Cache`, `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for multiple attributes fetch with `dictGet`, `dictGetOrDefault` functions. Fixes [#21517](https://github.com/ClickHouse/ClickHouse/issues/21517). [#20595](https://github.com/ClickHouse/ClickHouse/pull/20595) ([Maksim Kita](https://github.com/kitaisreal)).
* Support `dictHas` function for `RangeHashedDictionary`. Fixes [#6680](https://github.com/ClickHouse/ClickHouse/issues/6680). [#19816](https://github.com/ClickHouse/ClickHouse/pull/19816) ([Maksim Kita](https://github.com/kitaisreal)).
* Add function `timezoneOf` that returns the timezone name of `DateTime` or `DateTime64` data types. This does not close [#9959](https://github.com/ClickHouse/ClickHouse/issues/9959). Fix inconsistencies in function names: add aliases `timezone` and `timeZone`, as well as `toTimezone` and `toTimeZone`, and `timezoneOf` and `timeZoneOf`. [#22001](https://github.com/ClickHouse/ClickHouse/pull/22001) ([alexey-milovidov](https://github.com/alexey-milovidov)).
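A sketch of the new function and its camel-case alias:

```sql
SELECT timezoneOf(now());  -- timezone name of the value, e.g. 'UTC' on a default server
SELECT timeZoneOf(toTimeZone(now(), 'Europe/Amsterdam'));  -- 'Europe/Amsterdam'
```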
* Add new optional clause `GRANTEES` for `CREATE/ALTER USER` commands. It specifies users or roles which are allowed to receive grants from this user, provided this user also has all the required access granted with grant option. By default `GRANTEES ANY` is used, which means a user with grant option can grant to anyone. Syntax: `CREATE USER ... GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]`. [#21641](https://github.com/ClickHouse/ClickHouse/pull/21641) ([Vitaly Baranov](https://github.com/vitlibar)).
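Following the syntax above, a short example (user and role names are hypothetical):

```sql
-- john may grant his privileges only to members of role `analysts`,
-- except for user `intern`:
CREATE USER john GRANTEES analysts EXCEPT intern;
-- Restore the default behaviour:
ALTER USER john GRANTEES ANY;
```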
* Add new column `slowdowns_count` to `system.clusters`. When using hedged requests, it shows how many times we switched to another replica because this replica was responding slowly. Also show the actual value of `errors_count` in `system.clusters`. [#21480](https://github.com/ClickHouse/ClickHouse/pull/21480) ([Kruglov Pavel](https://github.com/Avogar)).
* Add `_partition_id` virtual column for `MergeTree*` engines. Allow to prune partitions by `_partition_id`. Add `partitionID()` function to calculate the partition id string. [#21401](https://github.com/ClickHouse/ClickHouse/pull/21401) ([Amos Bird](https://github.com/amosbird)).
* Add function `isIPAddressInRange` to test if an IPv4 or IPv6 address is contained in a given CIDR network prefix. [#21329](https://github.com/ClickHouse/ClickHouse/pull/21329) ([PHO](https://github.com/depressed-pho)).
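A sketch of the function for both address families:

```sql
SELECT isIPAddressInRange('192.168.1.17', '192.168.1.0/24');  -- 1
SELECT isIPAddressInRange('10.0.0.1',     '192.168.1.0/24');  -- 0
SELECT isIPAddressInRange('2001:db8::1',  '2001:db8::/32');   -- 1
```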
* Added new SQL command `ALTER TABLE 'table_name' UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'`. This command is needed to properly remove frozen partitions from all disks. [#21142](https://github.com/ClickHouse/ClickHouse/pull/21142) ([Pavel Kovalenko](https://github.com/Jokser)).
* Support implicit key type conversion for JOIN. [#19885](https://github.com/ClickHouse/ClickHouse/pull/19885) ([Vladimir](https://github.com/vdimir)).

#### Experimental Feature

* Support `RANGE OFFSET` frame (for window functions) for floating point types. Implement `lagInFrame`/`leadInFrame` window functions, which are analogous to `lag`/`lead`, but respect the window frame. They are identical when the frame is `between unbounded preceding and unbounded following`. This closes [#5485](https://github.com/ClickHouse/ClickHouse/issues/5485). [#21895](https://github.com/ClickHouse/ClickHouse/pull/21895) ([Alexander Kuzmenkov](https://github.com/akuzm)).
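A sketch of `lagInFrame` with an explicit unbounded frame, where it matches the classic `lag` (window functions were experimental in this release, so enabling an experimental-features setting may be required):

```sql
SELECT
    number,
    lagInFrame(number, 1) OVER (
        ORDER BY number
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS prev
FROM numbers(3);
-- prev is the type's default (0) for the first row, then 0 and 1.
```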
* Zero-copy replication for `ReplicatedMergeTree` over S3 storage. [#16240](https://github.com/ClickHouse/ClickHouse/pull/16240) ([ianton-ru](https://github.com/ianton-ru)).
* Added possibility to migrate an existing S3 disk to the schema with backup-restore capabilities. [#22070](https://github.com/ClickHouse/ClickHouse/pull/22070) ([Pavel Kovalenko](https://github.com/Jokser)).

#### Performance Improvement

* Supported parallel formatting in `clickhouse-local` and everywhere else. [#21630](https://github.com/ClickHouse/ClickHouse/pull/21630) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Support parallel parsing for `CSVWithNames` and `TSVWithNames` formats. This closes [#21085](https://github.com/ClickHouse/ClickHouse/issues/21085). [#21149](https://github.com/ClickHouse/ClickHouse/pull/21149) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Enable read with mmap IO for file ranges from 64 MiB (the setting `min_bytes_to_use_mmap_io`). It may lead to moderate performance improvement. [#22326](https://github.com/ClickHouse/ClickHouse/pull/22326) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add cache for files read with `min_bytes_to_use_mmap_io` setting. It gives significant (2x and more) performance improvement when the value of the setting is small, by avoiding frequent mmap/munmap calls and the consequent page faults. Note that mmap IO has major drawbacks that make it less reliable in production (e.g. hangs or SIGBUS on faulty disks; less controllable memory usage). Nevertheless, it is good in benchmarks. [#22206](https://github.com/ClickHouse/ClickHouse/pull/22206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid unnecessary data copy when using codec `NONE`. Please note that codec `NONE` is mostly useless; it's recommended to always use compression (`LZ4` is the default). Despite the common belief, disabling compression may not improve performance (the opposite effect is possible). The `NONE` codec is useful in some cases: when data is incompressible, and for synthetic benchmarks. [#22145](https://github.com/ClickHouse/ClickHouse/pull/22145) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Faster `GROUP BY` with small `max_rows_to_group_by` and `group_by_overflow_mode='any'`. [#21856](https://github.com/ClickHouse/ClickHouse/pull/21856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Optimize performance of queries like `SELECT ... FINAL ... WHERE`. Now in queries with `FINAL` it's allowed to move columns which are in the sorting key to `PREWHERE`. [#21830](https://github.com/ClickHouse/ClickHouse/pull/21830) ([foolchi](https://github.com/foolchi)).
* Improved performance by replacing `memcpy` with another implementation. This closes [#18583](https://github.com/ClickHouse/ClickHouse/issues/18583). [#21520](https://github.com/ClickHouse/ClickHouse/pull/21520) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of aggregation in order of sorting key (with the setting `optimize_aggregation_in_order` enabled). [#19401](https://github.com/ClickHouse/ClickHouse/pull/19401) ([Anton Popov](https://github.com/CurtizJ)).

#### Improvement

* Add connection pool for PostgreSQL table/database engine and dictionary source. Should fix [#21444](https://github.com/ClickHouse/ClickHouse/issues/21444). [#21839](https://github.com/ClickHouse/ClickHouse/pull/21839) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support non-default table schema for postgres storage/table-function. Closes [#21701](https://github.com/ClickHouse/ClickHouse/issues/21701). [#21711](https://github.com/ClickHouse/ClickHouse/pull/21711) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support replicas priority for postgres dictionary source. [#21710](https://github.com/ClickHouse/ClickHouse/pull/21710) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Introduce a new merge tree setting `min_bytes_to_rebalance_partition_over_jbod` which allows assigning new parts to different disks of a JBOD volume in a balanced way. [#16481](https://github.com/ClickHouse/ClickHouse/pull/16481) ([Amos Bird](https://github.com/amosbird)).
* Added `Grant`, `Revoke` and `System` values of `query_kind` column for corresponding queries in `system.query_log`. [#21102](https://github.com/ClickHouse/ClickHouse/pull/21102) ([Vasily Nemkov](https://github.com/Enmk)).
* Allow customizing timeouts for HTTP connections used for replication independently from other HTTP timeouts. [#20088](https://github.com/ClickHouse/ClickHouse/pull/20088) ([nvartolomei](https://github.com/nvartolomei)).
* Better exception message in client in case of exception while the server is writing blocks. In previous versions the client could get a misleading message like `Data compressed with different methods`. [#22427](https://github.com/ClickHouse/ClickHouse/pull/22427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error `Directory tmp_fetch_XXX already exists` which could happen after a failed fetch of a part. Delete the temporary fetch directory if it already exists. Fixes [#14197](https://github.com/ClickHouse/ClickHouse/issues/14197). [#22411](https://github.com/ClickHouse/ClickHouse/pull/22411) ([nvartolomei](https://github.com/nvartolomei)).
* Fix MSan report for function `range` with `UInt256` argument (support for large integers is experimental). This closes [#22157](https://github.com/ClickHouse/ClickHouse/issues/22157). [#22387](https://github.com/ClickHouse/ClickHouse/pull/22387) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `current_database` column to `system.processes` table. It contains the current database of the query. [#22365](https://github.com/ClickHouse/ClickHouse/pull/22365) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add case-insensitive history search/navigation and subword movement features to `clickhouse-client`. [#22105](https://github.com/ClickHouse/ClickHouse/pull/22105) ([Amos Bird](https://github.com/amosbird)).
* If a tuple of NULLs, e.g. `(NULL, NULL)`, is on the left hand side of the `IN` operator with tuples of non-NULLs on the right hand side, e.g. `SELECT (NULL, NULL) IN ((0, 0), (3, 1))`, return 0 instead of throwing an exception about incompatible types. The expression may also appear due to optimization of something like `SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1)`. This closes [#22017](https://github.com/ClickHouse/ClickHouse/issues/22017). [#22063](https://github.com/ClickHouse/ClickHouse/pull/22063) ([alexey-milovidov](https://github.com/alexey-milovidov)).
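The change above in one line:

```sql
SELECT (NULL, NULL) IN ((0, 0), (3, 1));  -- now returns 0 instead of throwing
```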
* Update used version of simdjson to 0.9.1. This fixes [#21984](https://github.com/ClickHouse/ClickHouse/issues/21984). [#22057](https://github.com/ClickHouse/ClickHouse/pull/22057) ([Vitaly Baranov](https://github.com/vitlibar)).
* Added case insensitive aliases for `CONNECTION_ID()` and `VERSION()` functions. This fixes [#22028](https://github.com/ClickHouse/ClickHouse/issues/22028). [#22042](https://github.com/ClickHouse/ClickHouse/pull/22042) ([Eugene Klimov](https://github.com/Slach)).
* Add option `strict_increase` to `windowFunnel` function to calculate each event once (resolves [#21835](https://github.com/ClickHouse/ClickHouse/issues/21835)). [#22025](https://github.com/ClickHouse/ClickHouse/pull/22025) ([Vladimir](https://github.com/vdimir)).
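A sketch of the new mode, assuming a hypothetical table `events` with columns `ts` (`DateTime`) and `event` (`String`):

```sql
SELECT windowFunnel(3600, 'strict_increase')(
           ts,
           event = 'view',
           event = 'click',
           event = 'purchase') AS steps
FROM events;
-- With 'strict_increase', the chain only advances on strictly increasing
-- timestamps, so events sharing a timestamp are not counted as extra steps.
```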
* If the partition key of a `MergeTree` table does not include `Date` or `DateTime` columns but includes exactly one `DateTime64` column, expose its values in the `min_time` and `max_time` columns in `system.parts` and `system.parts_columns` tables. Add `min_time` and `max_time` columns to the `system.parts_columns` table (this was inconsistent with the `system.parts` table). This closes [#18244](https://github.com/ClickHouse/ClickHouse/issues/18244). [#22011](https://github.com/ClickHouse/ClickHouse/pull/22011) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Supported `replication_alter_partitions_sync=1` setting in `clickhouse-copier` for moving partitions from the helping table to the destination. Decreased default timeouts. Fixes [#21911](https://github.com/ClickHouse/ClickHouse/issues/21911). [#21912](https://github.com/ClickHouse/ClickHouse/pull/21912) ([turbo jason](https://github.com/songenjie)).
* Show path to data directory of `EmbeddedRocksDB` tables in system tables. [#21903](https://github.com/ClickHouse/ClickHouse/pull/21903) ([tavplubix](https://github.com/tavplubix)).
* Add profile event `HedgedRequestsChangeReplica`, change the read data timeout from seconds to milliseconds. [#21886](https://github.com/ClickHouse/ClickHouse/pull/21886) ([Kruglov Pavel](https://github.com/Avogar)).
* DiskS3 (experimental feature under development). Fixed bug with the impossibility to move a directory if the destination is not empty and a cache disk is used. [#21837](https://github.com/ClickHouse/ClickHouse/pull/21837) ([Pavel Kovalenko](https://github.com/Jokser)).
* Better formatting for `Array` and `Map` data types in Web UI. [#21798](https://github.com/ClickHouse/ClickHouse/pull/21798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update clusters only if their configurations were updated. [#21685](https://github.com/ClickHouse/ClickHouse/pull/21685) ([Kruglov Pavel](https://github.com/Avogar)).
* Propagate query and session settings for distributed DDL queries. Set `distributed_ddl_entry_format_version` to 2 to enable this. Added `distributed_ddl_output_mode` setting. Supported modes: `none`, `throw` (default), `null_status_on_timeout` and `never_throw`. Miscellaneous fixes and improvements for the `Replicated` database engine. [#21535](https://github.com/ClickHouse/ClickHouse/pull/21535) ([tavplubix](https://github.com/tavplubix)).
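A sketch combining the two settings named above (cluster, database and column names are hypothetical):

```sql
SET distributed_ddl_entry_format_version = 2;  -- propagate query/session settings
SET distributed_ddl_output_mode = 'null_status_on_timeout';
-- Hosts that time out produce rows with NULL status instead of an exception:
ALTER TABLE db.events ON CLUSTER my_cluster DROP COLUMN obsolete;
```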
* If `PODArray` was instantiated with an element size that is neither a fraction nor a multiple of 16, buffer overflow was possible. No bugs in current releases exist. [#21533](https://github.com/ClickHouse/ClickHouse/pull/21533) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `last_error_time`/`last_error_message`/`last_error_stacktrace`/`remote` columns for `system.errors`. [#21529](https://github.com/ClickHouse/ClickHouse/pull/21529) ([Azat Khuzhin](https://github.com/azat)).
* Add aliases `simpleJSONExtract/simpleJSONHas` to `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes #21383. [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
* Add setting `optimize_skip_unused_shards_limit` to limit the number of sharding key values for `optimize_skip_unused_shards`. [#21512](https://github.com/ClickHouse/ClickHouse/pull/21512) ([Azat Khuzhin](https://github.com/azat)).
* Improve `clickhouse-format` to not throw an exception when there are extra spaces or a comment after the last query, and to throw an exception early with a readable message when formatting `ASTInsertQuery` with data. [#21311](https://github.com/ClickHouse/ClickHouse/pull/21311) ([flynn](https://github.com/ucasFL)).
* Improve support of integer keys in data type `Map`. [#21157](https://github.com/ClickHouse/ClickHouse/pull/21157) ([Anton Popov](https://github.com/CurtizJ)).
* MaterializeMySQL: attempt to reconnect to MySQL if the connection is lost. [#20961](https://github.com/ClickHouse/ClickHouse/pull/20961) ([Håvard Kvålen](https://github.com/havardk)).
* Support more cases to rewrite `CROSS JOIN` to `INNER JOIN`. [#20392](https://github.com/ClickHouse/ClickHouse/pull/20392) ([Vladimir](https://github.com/vdimir)).
* Do not create empty parts on INSERT when the `optimize_on_insert` setting is enabled. Fixes [#20304](https://github.com/ClickHouse/ClickHouse/issues/20304). [#20387](https://github.com/ClickHouse/ClickHouse/pull/20387) ([Kruglov Pavel](https://github.com/Avogar)).
* `MaterializeMySQL`: add minmax skipping index for `_version` column. [#20382](https://github.com/ClickHouse/ClickHouse/pull/20382) ([Stig Bakken](https://github.com/stigsb)).
* Add option `--backslash` for `clickhouse-format`, which can add a backslash at the end of each line of the formatted query. [#21494](https://github.com/ClickHouse/ClickHouse/pull/21494) ([flynn](https://github.com/ucasFL)).
* Now ClickHouse will not throw a `LOGICAL_ERROR` exception when we try to mutate an already covered part. Fixes [#22013](https://github.com/ClickHouse/ClickHouse/issues/22013). [#22291](https://github.com/ClickHouse/ClickHouse/pull/22291) ([alesapin](https://github.com/alesapin)).

#### Bug Fix

* Remove socket from epoll before cancelling packet receiver in `HedgedConnections` to prevent a possible race. Fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)).
* Add (missing) memory accounting in parallel parsing routines. In previous versions OOM was possible when the result set contained very large blocks of data. This closes [#22008](https://github.com/ClickHouse/ClickHouse/issues/22008). [#22425](https://github.com/ClickHouse/ClickHouse/pull/22425) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix exception which may happen when a `SELECT` has a constant `WHERE` condition and the source table has columns whose names are digits. [#22270](https://github.com/ClickHouse/ClickHouse/pull/22270) ([LiuNeng](https://github.com/liuneng1994)).
* Fix query cancellation with `use_hedged_requests=0` and `async_socket_for_remote=1`. [#22183](https://github.com/ClickHouse/ClickHouse/pull/22183) ([Azat Khuzhin](https://github.com/azat)).
* Fix uncaught exception in `InterserverIOHTTPHandler`. [#22146](https://github.com/ClickHouse/ClickHouse/pull/22146) ([Azat Khuzhin](https://github.com/azat)).
* Fix docker entrypoint in case `http_port` is not in the config. [#22132](https://github.com/ClickHouse/ClickHouse/pull/22132) ([Ewout](https://github.com/devwout)).
* Fix error `Invalid number of rows in Chunk` in `JOIN` with `TOTALS` and `arrayJoin`. Closes [#19303](https://github.com/ClickHouse/ClickHouse/issues/19303). [#22129](https://github.com/ClickHouse/ClickHouse/pull/22129) ([Vladimir](https://github.com/vdimir)).
* Fix the name of the background thread pool used to poll messages from Kafka. With the broken thread pool, the Kafka engine would not consume messages from the message queue. [#22122](https://github.com/ClickHouse/ClickHouse/pull/22122) ([fastio](https://github.com/fastio)).
* Fix waiting for `OPTIMIZE` and `ALTER` queries for `ReplicatedMergeTree` table engines. Now the query will not hang when the table was detached or restarted. [#22118](https://github.com/ClickHouse/ClickHouse/pull/22118) ([alesapin](https://github.com/alesapin)).
* Disable `async_socket_for_remote`/`use_hedged_requests` for buggy Linux kernels. [#22109](https://github.com/ClickHouse/ClickHouse/pull/22109) ([Azat Khuzhin](https://github.com/azat)).
* Docker entrypoint: avoid chown of `.` in case when `LOG_PATH` is empty. Closes [#22100](https://github.com/ClickHouse/ClickHouse/issues/22100). [#22102](https://github.com/ClickHouse/ClickHouse/pull/22102) ([filimonov](https://github.com/filimonov)).
* The function `decrypt` was lacking a check for the minimal size of data encrypted in `AEAD` mode. This closes [#21897](https://github.com/ClickHouse/ClickHouse/issues/21897). [#22064](https://github.com/ClickHouse/ClickHouse/pull/22064) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* In a rare case, a merge for `CollapsingMergeTree` may create a granule with `index_granularity + 1` rows. Because of this, an internal check added in [#18928](https://github.com/ClickHouse/ClickHouse/issues/18928) (affects 21.2 and 21.3) may fail with error `Incomplete granules are not allowed while blocks are granules size`. This error did not allow parts to merge. [#21976](https://github.com/ClickHouse/ClickHouse/pull/21976) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Reverted [#15454](https://github.com/ClickHouse/ClickHouse/issues/15454) that may cause significant increase in memory usage while loading external dictionaries of hashed type. This closes [#21935](https://github.com/ClickHouse/ClickHouse/issues/21935). [#21948](https://github.com/ClickHouse/ClickHouse/pull/21948) ([Maksim Kita](https://github.com/kitaisreal)).
* Prevent hedged connections overlaps (`Unknown packet 9 from server` error). [#21941](https://github.com/ClickHouse/ClickHouse/pull/21941) ([Azat Khuzhin](https://github.com/azat)).
* Fix reading the HTTP POST request with `multipart/form-data` content type in some cases. [#21936](https://github.com/ClickHouse/ClickHouse/pull/21936) ([Ivan](https://github.com/abyss7)).
* Fix wrong `ORDER BY` results when a query contains window functions, and optimization for reading in primary key order is applied. Fixes [#21828](https://github.com/ClickHouse/ClickHouse/issues/21828). [#21915](https://github.com/ClickHouse/ClickHouse/pull/21915) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix deadlock in first catboost model execution. Closes [#13832](https://github.com/ClickHouse/ClickHouse/issues/13832). [#21844](https://github.com/ClickHouse/ClickHouse/pull/21844) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix incorrect query result (and possible crash) which could happen when a `WHERE` or `HAVING` condition is pushed before `GROUP BY`. Fixes [#21773](https://github.com/ClickHouse/ClickHouse/issues/21773). [#21841](https://github.com/ClickHouse/ClickHouse/pull/21841) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Better error handling and logging in `WriteBufferFromS3`. [#21836](https://github.com/ClickHouse/ClickHouse/pull/21836) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix possible crashes in aggregate functions with combinator `Distinct` while using two-level aggregation. This is a follow-up fix of [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365). It could only be reproduced in a production environment. [#21818](https://github.com/ClickHouse/ClickHouse/pull/21818) ([Amos Bird](https://github.com/amosbird)).
* Fix scalar subquery index analysis. This fixes [#21717](https://github.com/ClickHouse/ClickHouse/issues/21717), which was introduced in [#18896](https://github.com/ClickHouse/ClickHouse/pull/18896). [#21766](https://github.com/ClickHouse/ClickHouse/pull/21766) ([Amos Bird](https://github.com/amosbird)).
* Fix bug for `ReplicatedMerge` table engines when `ALTER MODIFY COLUMN` query doesn't change the type of a `Decimal` column if its size (32 bit or 64 bit) doesn't change. [#21728](https://github.com/ClickHouse/ClickHouse/pull/21728) ([alesapin](https://github.com/alesapin)).
* Fix possible infinite waiting when concurrent `OPTIMIZE` and `DROP` are run for `ReplicatedMergeTree`. [#21716](https://github.com/ClickHouse/ClickHouse/pull/21716) ([Azat Khuzhin](https://github.com/azat)).
* Fix function `arrayElement` with type `Map` for constant integer arguments. [#21699](https://github.com/ClickHouse/ClickHouse/pull/21699) ([Anton Popov](https://github.com/CurtizJ)).
* Fix SIGSEGV on non-existing attributes from `ip_trie` with `access_to_key_from_attributes`. [#21692](https://github.com/ClickHouse/ClickHouse/pull/21692) ([Azat Khuzhin](https://github.com/azat)).
* The server now starts accepting connections only after `DDLWorker` and dictionary initialization. [#21676](https://github.com/ClickHouse/ClickHouse/pull/21676) ([Azat Khuzhin](https://github.com/azat)).
* Add type conversion for keys of tables of type `Join` (previously this led to SIGSEGV). [#21646](https://github.com/ClickHouse/ClickHouse/pull/21646) ([Azat Khuzhin](https://github.com/azat)).
* Fix distributed requests cancellation (for example a simple select from multiple shards with a limit, i.e. `select * from remote('127.{2,3}', system.numbers) limit 100`) with `async_socket_for_remote=1`. [#21643](https://github.com/ClickHouse/ClickHouse/pull/21643) ([Azat Khuzhin](https://github.com/azat)).
* Fix `fsync_part_directory` for horizontal merge. [#21642](https://github.com/ClickHouse/ClickHouse/pull/21642) ([Azat Khuzhin](https://github.com/azat)).
* Remove unknown columns from joined table in `WHERE` for queries to external database engines (MySQL, PostgreSQL). Closes [#14614](https://github.com/ClickHouse/ClickHouse/issues/14614), closes [#19288](https://github.com/ClickHouse/ClickHouse/issues/19288) (dup), closes [#19645](https://github.com/ClickHouse/ClickHouse/issues/19645) (dup). [#21640](https://github.com/ClickHouse/ClickHouse/pull/21640) ([Vladimir](https://github.com/vdimir)).
* `std::terminate` was called if there was an error writing data to S3. [#21624](https://github.com/ClickHouse/ClickHouse/pull/21624) ([Vladimir](https://github.com/vdimir)).
* Fix possible error `Cannot find column` when `optimize_skip_unused_shards` is enabled and zero shards are used. [#21579](https://github.com/ClickHouse/ClickHouse/pull/21579) ([Azat Khuzhin](https://github.com/azat)).
* If a query had a constant `WHERE` condition and the `optimize_skip_unused_shards` setting was enabled, all shards could be skipped and the query could return an incorrect empty result. [#21550](https://github.com/ClickHouse/ClickHouse/pull/21550) ([Amos Bird](https://github.com/amosbird)).
* Fix table function `clusterAllReplicas` returning wrong `_shard_num`. Closes [#21481](https://github.com/ClickHouse/ClickHouse/issues/21481). [#21498](https://github.com/ClickHouse/ClickHouse/pull/21498) ([flynn](https://github.com/ucasFL)).
* Fix S3 table holding old credentials after config update. [#21457](https://github.com/ClickHouse/ClickHouse/pull/21457) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Fixed race on SSL object inside `SecureSocket` in Poco. [#21456](https://github.com/ClickHouse/ClickHouse/pull/21456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `Avro` format parsing for `Kafka`. Fixes [#21437](https://github.com/ClickHouse/ClickHouse/issues/21437). [#21438](https://github.com/ClickHouse/ClickHouse/pull/21438) ([Ilya Golshtein](https://github.com/ilejn)).
|
||||||
|
* Fix receive and send timeouts and non-blocking read in secure socket. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||||
|
* `force_drop_table` flag didn't work for `MATERIALIZED VIEW`, it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)).
|
||||||
|
* Fix name clashes in `PredicateRewriteVisitor`. It caused incorrect `WHERE` filtration after full join. Close [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
|
||||||
|
|
||||||
|
#### Build/Testing/Packaging Improvement
|
||||||
|
|
||||||
|
* Add [Jepsen](https://github.com/jepsen-io/jepsen) tests for ClickHouse Keeper. [#21677](https://github.com/ClickHouse/ClickHouse/pull/21677) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Run stateless tests in parallel in CI. Depends on [#22181](https://github.com/ClickHouse/ClickHouse/issues/22181). [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
|
||||||
|
* Enable status check for [SQLancer](https://github.com/sqlancer/sqlancer) CI run. [#22015](https://github.com/ClickHouse/ClickHouse/pull/22015) ([Ilya Yatsishin](https://github.com/qoega)).
|
||||||
|
* Multiple preparations for PowerPC builds: Enable the bundled openldap on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable compiling on `ppc64le` with Clang. [#22476](https://github.com/ClickHouse/ClickHouse/pull/22476) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix compiling boost on `ppc64le`. [#22474](https://github.com/ClickHouse/ClickHouse/pull/22474) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix CMake error about internal CMake variable `CMAKE_ASM_COMPILE_OBJECT` not set on `ppc64le`. [#22469](https://github.com/ClickHouse/ClickHouse/pull/22469) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix Fedora/RHEL/CentOS not finding `libclang_rt.builtins` on `ppc64le`. [#22458](https://github.com/ClickHouse/ClickHouse/pull/22458) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable building with `jemalloc` on `ppc64le`. [#22447](https://github.com/ClickHouse/ClickHouse/pull/22447) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix ClickHouse's config embedding and cctz's timezone embedding on `ppc64le`. [#22445](https://github.com/ClickHouse/ClickHouse/pull/22445) ([Kfir Itzhak](https://github.com/mastertheknife)). Fixed compiling on `ppc64le` and use the correct instruction pointer register on `ppc64le`. [#22430](https://github.com/ClickHouse/ClickHouse/pull/22430) ([Kfir Itzhak](https://github.com/mastertheknife)).
|
||||||
|
* Re-enable the S3 (AWS) library on `aarch64`. [#22484](https://github.com/ClickHouse/ClickHouse/pull/22484) ([Kfir Itzhak](https://github.com/mastertheknife)).
|
||||||
|
* Add `tzdata` to Docker containers because reading `ORC` formats requires it. This closes [#14156](https://github.com/ClickHouse/ClickHouse/issues/14156). [#22000](https://github.com/ClickHouse/ClickHouse/pull/22000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||||
|
* Introduce 2 arguments for `clickhouse-server` image Dockerfile: `deb_location` & `single_binary_location`. [#21977](https://github.com/ClickHouse/ClickHouse/pull/21977) ([filimonov](https://github.com/filimonov)).
|
||||||
|
* Allow to use clang-tidy with release builds by enabling assertions if it is used. [#21914](https://github.com/ClickHouse/ClickHouse/pull/21914) ([alexey-milovidov](https://github.com/alexey-milovidov)).
|
||||||
|
* Add llvm-12 binaries name to search in cmake scripts. Implicit constants conversions to mute clang warnings. Updated submodules to build with CMake 3.19. Mute recursion in macro expansion in `readpassphrase` library. Deprecated `-fuse-ld` changed to `--ld-path` for clang. [#21597](https://github.com/ClickHouse/ClickHouse/pull/21597) ([Ilya Yatsishin](https://github.com/qoega)).
|
||||||
|
* Updating `docker/test/testflows/runner/dockerd-entrypoint.sh` to use Yandex dockerhub-proxy, because Docker Hub has enabled very restrictive rate limits [#21551](https://github.com/ClickHouse/ClickHouse/pull/21551) ([vzakaznikov](https://github.com/vzakaznikov)).
|
||||||
|
* Fix macOS shared lib build. [#20184](https://github.com/ClickHouse/ClickHouse/pull/20184) ([nvartolomei](https://github.com/nvartolomei)).
|
||||||
|
* Add `ctime` option to `zookeeper-dump-tree`. It allows to dump node creation time. [#21842](https://github.com/ClickHouse/ClickHouse/pull/21842) ([Ilya](https://github.com/HumanUser)).
|
||||||
|
|
||||||
|
|
||||||
## ClickHouse release 21.3 (LTS)

### ClickHouse release v21.3, 2021-03-12

@@ -26,7 +179,7 @@

#### Experimental feature

* Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. [#16193](https://github.com/ClickHouse/ClickHouse/pull/16193) ([tavplubix](https://github.com/tavplubix)).
* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add the ability to backup/restore metadata files for DiskS3. [#18377](https://github.com/ClickHouse/ClickHouse/pull/18377) ([Pavel Kovalenko](https://github.com/Jokser)).

#### Performance Improvement
@@ -68,17 +68,30 @@ endif ()

include (cmake/find/ccache.cmake)

# Take care to add prlimit in command line before ccache, or else ccache thinks that
# prlimit is compiler, and clang++ is its input file, and refuses to work with
# multiple inputs, e.g in ccache log:
# [2021-03-31T18:06:32.655327 36900] Command line: /usr/bin/ccache prlimit --as=10000000000 --data=5000000000 --cpu=600 /usr/bin/clang++-11 - ...... std=gnu++2a -MD -MT src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -MF src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o.d -o src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -c ../src/Storages/MergeTree/IMergeTreeDataPart.cpp
#
# [2021-03-31T18:06:32.656704 36900] Multiple input files: /usr/bin/clang++-11 and ../src/Storages/MergeTree/IMergeTreeDataPart.cpp
#
# Another way would be to use --ccache-skip option before clang++-11 to make
# ccache ignore it.
option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling." OFF)
if (ENABLE_CHECK_HEAVY_BUILDS)
    # set DATA (since RSS does not work since 2.6.x+) to 2G
    set (RLIMIT_DATA 5000000000)
    # set VIRT (RLIMIT_AS) to 10G (DATA*10)
    set (RLIMIT_AS 10000000000)
    # set CPU time limit to 600 seconds
    set (RLIMIT_CPU 600)

    # gcc10/gcc10/clang -fsanitize=memory is too heavy
    if (SANITIZE STREQUAL "memory" OR COMPILER_GCC)
        set (RLIMIT_DATA 10000000000)
    endif()

    set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=${RLIMIT_CPU} ${CMAKE_CXX_COMPILER_LAUNCHER})
endif ()

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "None")
@@ -277,6 +290,12 @@ if (COMPILER_GCC OR COMPILER_CLANG)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsized-deallocation")
endif ()

# falign-functions=32 prevents from random performance regressions with the code change. Thus, providing more stable
# benchmarks.
if (COMPILER_GCC OR COMPILER_CLANG)
    set(COMPILER_FLAGS "${COMPILER_FLAGS} -falign-functions=32")
endif ()

# Compiler-specific coverage flags e.g. -fcoverage-mapping for gcc
option(WITH_COVERAGE "Profile the resulting binary/binaries" OFF)
@@ -464,6 +483,7 @@ find_contrib_lib(double-conversion) # Must be before parquet
include (cmake/find/ssl.cmake)
include (cmake/find/ldap.cmake) # after ssl
include (cmake/find/icu.cmake)
include (cmake/find/xz.cmake)
include (cmake/find/zlib.cmake)
include (cmake/find/zstd.cmake)
include (cmake/find/ltdl.cmake) # for odbc
@@ -496,6 +516,7 @@ include (cmake/find/fast_float.cmake)
include (cmake/find/rapidjson.cmake)
include (cmake/find/fastops.cmake)
include (cmake/find/odbc.cmake)
include (cmake/find/nanodbc.cmake)
include (cmake/find/rocksdb.cmake)
include (cmake/find/libpqxx.cmake)
include (cmake/find/nuraft.cmake)
@@ -8,6 +8,7 @@ add_subdirectory (loggers)
add_subdirectory (pcg-random)
add_subdirectory (widechar_width)
add_subdirectory (readpassphrase)
add_subdirectory (bridge)

if (USE_MYSQL)
    add_subdirectory (mysqlxx)

base/bridge/CMakeLists.txt (new file, 7 lines)
@@ -0,0 +1,7 @@
add_library (bridge
    IBridge.cpp
)

target_include_directories (daemon PUBLIC ..)
target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC)

base/bridge/IBridge.cpp (new file, 238 lines)
@@ -0,0 +1,238 @@
#include "IBridge.h"

#include <IO/ReadHelpers.h>
#include <boost/program_options.hpp>
#include <Poco/Net/NetException.h>
#include <Poco/Util/HelpFormatter.h>
#include <Common/StringUtils/StringUtils.h>
#include <Formats/registerFormats.h>
#include <common/logger_useful.h>
#include <Common/SensitiveDataMasker.h>
#include <Server/HTTP/HTTPServer.h>

#if USE_ODBC
#    include <Poco/Data/ODBC/Connector.h>
#endif


namespace DB
{

namespace ErrorCodes
{
    extern const int ARGUMENT_OUT_OF_BOUND;
}

namespace
{

Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log)
{
    Poco::Net::SocketAddress socket_address;
    try
    {
        socket_address = Poco::Net::SocketAddress(host, port);
    }
    catch (const Poco::Net::DNSException & e)
    {
        const auto code = e.code();
        if (code == EAI_FAMILY
#if defined(EAI_ADDRFAMILY)
            || code == EAI_ADDRFAMILY
#endif
        )
        {
            LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in <listen_host> element of configuration file. Example: <listen_host>0.0.0.0</listen_host>", host, e.code(), e.message());
        }

        throw;
    }
    return socket_address;
}

Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log)
{
    auto address = makeSocketAddress(host, port, log);
#if POCO_VERSION < 0x01080000
    socket.bind(address, /* reuseAddress = */ true);
#else
    socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false);
#endif

    socket.listen(/* backlog = */ 64);

    return address;
}

}


void IBridge::handleHelp(const std::string &, const std::string &)
{
    Poco::Util::HelpFormatter help_formatter(options());
    help_formatter.setCommand(commandName());
    help_formatter.setHeader("HTTP-proxy for odbc requests");
    help_formatter.setUsage("--http-port <port>");
    help_formatter.format(std::cerr);

    stopOptionsProcessing();
}


void IBridge::defineOptions(Poco::Util::OptionSet & options)
{
    options.addOption(
        Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true).binding("http-port"));

    options.addOption(
        Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host"));

    options.addOption(
        Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout"));

    options.addOption(
        Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024").argument("max-server-connections").binding("max-server-connections"));

    options.addOption(
        Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10").argument("keep-alive-timeout").binding("keep-alive-timeout"));

    options.addOption(
        Poco::Util::Option("log-level", "", "sets log level, default info").argument("log-level").binding("logger.level"));

    options.addOption(
        Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log"));

    options.addOption(
        Poco::Util::Option("err-log-path", "", "err log path for all logs, default no").argument("err-log-path").binding("logger.errorlog"));

    options.addOption(
        Poco::Util::Option("stdout-path", "", "stdout log path, default console").argument("stdout-path").binding("logger.stdout"));

    options.addOption(
        Poco::Util::Option("stderr-path", "", "stderr log path, default console").argument("stderr-path").binding("logger.stderr"));

    using Me = std::decay_t<decltype(*this)>;

    options.addOption(
        Poco::Util::Option("help", "", "produce this help message").binding("help").callback(Poco::Util::OptionCallback<Me>(this, &Me::handleHelp)));

    ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config
}


void IBridge::initialize(Application & self)
{
    BaseDaemon::closeFDs();
    is_help = config().has("help");

    if (is_help)
        return;

    config().setString("logger", bridgeName());

    /// Redirect stdout, stderr to specified files.
    /// Some libraries and sanitizers write to stderr in case of errors.
    const auto stdout_path = config().getString("logger.stdout", "");
    if (!stdout_path.empty())
    {
        if (!freopen(stdout_path.c_str(), "a+", stdout))
            throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path);

        /// Disable buffering for stdout.
        setbuf(stdout, nullptr);
    }
    const auto stderr_path = config().getString("logger.stderr", "");
    if (!stderr_path.empty())
    {
        if (!freopen(stderr_path.c_str(), "a+", stderr))
            throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path);

        /// Disable buffering for stderr.
        setbuf(stderr, nullptr);
    }

    buildLoggers(config(), logger(), self.commandName());

    BaseDaemon::logRevision();

    log = &logger();
    hostname = config().getString("listen-host", "127.0.0.1");
    port = config().getUInt("http-port");
    if (port > 0xFFFF)
        throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND);

    http_timeout = config().getUInt("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT);
    max_server_connections = config().getUInt("max-server-connections", 1024);
    keep_alive_timeout = config().getUInt("keep-alive-timeout", 10);

    initializeTerminationAndSignalProcessing();

#if USE_ODBC
    if (bridgeName() == "ODBCBridge")
        Poco::Data::ODBC::Connector::registerConnector();
#endif

    ServerApplication::initialize(self); // NOLINT
}


void IBridge::uninitialize()
{
    BaseDaemon::uninitialize();
}


int IBridge::main(const std::vector<std::string> & /*args*/)
{
    if (is_help)
        return Application::EXIT_OK;

    registerFormats();
    LOG_INFO(log, "Starting up {} on host: {}, port: {}", bridgeName(), hostname, port);

    Poco::Net::ServerSocket socket;
    auto address = socketBindListen(socket, hostname, port, log);
    socket.setReceiveTimeout(http_timeout);
    socket.setSendTimeout(http_timeout);

    Poco::ThreadPool server_pool(3, max_server_connections);

    Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams;
    http_params->setTimeout(http_timeout);
    http_params->setKeepAliveTimeout(keep_alive_timeout);

    auto shared_context = Context::createShared();
    auto context = Context::createGlobal(shared_context.get());
    context->makeGlobalContext();

    if (config().has("query_masking_rules"))
        SensitiveDataMasker::setInstance(std::make_unique<SensitiveDataMasker>(config(), "query_masking_rules"));

    auto server = HTTPServer(
        context,
        getHandlerFactoryPtr(context),
        server_pool,
        socket,
        http_params);

    SCOPE_EXIT({
        LOG_DEBUG(log, "Received termination signal.");
        LOG_DEBUG(log, "Waiting for current connections to close.");

        server.stop();

        for (size_t count : ext::range(1, 6))
        {
            if (server.currentConnections() == 0)
                break;
            LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count);
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        }
    });

    server.start();
    LOG_INFO(log, "Listening http://{}", address.toString());

    waitForTerminationRequest();
    return Application::EXIT_OK;
}

}

base/bridge/IBridge.h (new file, 51 lines)
@@ -0,0 +1,51 @@
#pragma once

#include <Interpreters/Context.h>
#include <Server/HTTP/HTTPRequestHandlerFactory.h>
#include <daemon/BaseDaemon.h>

#include <Poco/Logger.h>
#include <Poco/Util/ServerApplication.h>


namespace DB
{

/// Class represents base for clickhouse-odbc-bridge and clickhouse-library-bridge servers.
/// Listens to incoming HTTP POST and GET requests on specified port and host.
/// Has two handlers '/' for all incoming POST requests and /ping for GET request about service status.
class IBridge : public BaseDaemon
{

public:
    /// Define command line arguments
    void defineOptions(Poco::Util::OptionSet & options) override;

protected:
    using HandlerFactoryPtr = std::shared_ptr<HTTPRequestHandlerFactory>;

    void initialize(Application & self) override;

    void uninitialize() override;

    int main(const std::vector<std::string> & args) override;

    virtual std::string bridgeName() const = 0;

    virtual HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const = 0;

    size_t keep_alive_timeout;

private:
    void handleHelp(const std::string &, const std::string &);

    bool is_help;
    std::string hostname;
    size_t port;
    std::string log_level;
    size_t max_server_connections;
    size_t http_timeout;

    Poco::Logger * log;
};
}
@@ -7,8 +7,7 @@
#include <condition_variable>

#include <common/defines.h>
#include <Common/MoveOrCopyIfThrow.h>

/** Pool for limited size objects that cannot be used from different threads simultaneously.
 * The main use case is to have fixed size of objects that can be reused in difference threads during their lifetime
@@ -25,7 +25,7 @@

#if defined(__PPC__)
#if !defined(__clang__)
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
#endif
#endif
@@ -1266,7 +1266,7 @@ public:
};

#if defined(__PPC__)
#if !defined(__clang__)
#pragma GCC diagnostic pop
#endif
#endif
@@ -271,9 +271,13 @@ struct integer<Bits, Signed>::_impl
/// As to_Integral does a static_cast to int64_t, it may result in UB.
/// The necessary check here is that long double has enough significant (mantissa) bits to store the
/// int64_t max value precisely.

//TODO Be compatible with Apple aarch64
#if not (defined(__APPLE__) && defined(__aarch64__))
static_assert(LDBL_MANT_DIG >= 64,
    "On your system long double has less than 64 precision bits,"
    "which may result in UB when initializing double from int64_t");
#endif

if ((rhs > 0 && rhs < static_cast<long double>(max_int)) || (rhs < 0 && rhs > static_cast<long double>(min_int)))
{
@@ -9,6 +9,7 @@
#include <common/getMemoryAmount.h>
#include <common/logger_useful.h>

#include <Common/formatReadable.h>
#include <Common/SymbolIndex.h>
#include <Common/StackTrace.h>
#include <Common/getNumberOfPhysicalCPUCores.h>

base/ext/scope_guard_safe.h (new file, 68 lines)
@@ -0,0 +1,68 @@
#pragma once
|
||||||
|
|
||||||
|
#include <ext/scope_guard.h>
|
||||||
|
#include <common/logger_useful.h>
|
||||||
|
#include <Common/MemoryTracker.h>
|
||||||
|
|
||||||
|
/// Same as SCOPE_EXIT() but block the MEMORY_LIMIT_EXCEEDED errors.
|
||||||
|
///
|
||||||
|
/// Typical example of SCOPE_EXIT_MEMORY() usage is when code under it may do
|
||||||
|
+/// some tiny allocations, that may fail under high memory pressure or/and low
+/// max_memory_usage (and related limits).
+///
+/// NOTE: it should be used with caution.
+#define SCOPE_EXIT_MEMORY(...) SCOPE_EXIT( \
+    MemoryTracker::LockExceptionInThread \
+        lock_memory_tracker(VariableContext::Global); \
+    __VA_ARGS__; \
+)
+
+/// Same as SCOPE_EXIT(), but any exception is caught and logged via tryLogCurrentException().
+///
+/// SCOPE_EXIT_SAFE() should be used when an exception in the code
+/// under SCOPE_EXIT() is not that fatal and an error message in the log is enough.
+///
+/// A good example is calling CurrentThread::detachQueryIfNotDetached().
+///
+/// An anti-pattern is calling WriteBuffer::finalize() under SCOPE_EXIT_SAFE()
+/// (since finalize() can do a final write, and it is better to fail abnormally
+/// than to ignore a write error).
+///
+/// NOTE: it should be used with double caution.
+#define SCOPE_EXIT_SAFE(...) SCOPE_EXIT( \
+    try \
+    { \
+        __VA_ARGS__; \
+    } \
+    catch (...) \
+    { \
+        tryLogCurrentException(__PRETTY_FUNCTION__); \
+    } \
+)
+
+/// Same as SCOPE_EXIT() but:
+/// - blocks MEMORY_LIMIT_EXCEEDED errors,
+/// - catches and logs any exception via tryLogCurrentException().
+///
+/// SCOPE_EXIT_MEMORY_SAFE() can be used when the error can be ignored; in
+/// addition to SCOPE_EXIT_SAFE() it also locks MEMORY_LIMIT_EXCEEDED to
+/// avoid such exceptions.
+///
+/// It exists as a separate helper, since you do not need to lock
+/// MEMORY_LIMIT_EXCEEDED always (there are cases when the code under SCOPE_EXIT
+/// does not do any allocations, while LockExceptionInThread increments an atomic
+/// variable).
+///
+/// NOTE: it should be used with triple caution.
+#define SCOPE_EXIT_MEMORY_SAFE(...) SCOPE_EXIT( \
+    try \
+    { \
+        MemoryTracker::LockExceptionInThread \
+            lock_memory_tracker(VariableContext::Global); \
+        __VA_ARGS__; \
+    } \
+    catch (...) \
+    { \
+        tryLogCurrentException(__PRETTY_FUNCTION__); \
+    } \
+)
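SCOPE_EXIT, MemoryTracker::LockExceptionInThread, and tryLogCurrentException are ClickHouse internals, so here is a minimal standalone sketch of the pattern SCOPE_EXIT_SAFE() builds on — run cleanup at scope exit and log-and-swallow any exception, since letting one escape a destructor would terminate the process. All names below are illustrative, not the real macros:

```cpp
#include <iostream>
#include <stdexcept>
#include <utility>

/// Runs the given callable in its destructor, swallowing (and reporting)
/// any exception: throwing out of a destructor would call std::terminate.
template <typename F>
class SafeScopeGuard
{
public:
    explicit SafeScopeGuard(F func) : func_(std::move(func)) {}
    SafeScopeGuard(const SafeScopeGuard &) = delete;
    ~SafeScopeGuard()
    {
        try
        {
            func_();
        }
        catch (...)
        {
            std::cerr << "exception ignored in scope guard\n";
        }
    }
private:
    F func_;
};

/// Returns true: the cleanup runs at scope exit, and the exception it
/// throws never propagates out of the enclosing block.
bool cleanup_ran_without_propagating()
{
    bool ran = false;
    {
        SafeScopeGuard guard([&] { ran = true; throw std::runtime_error("ignored"); });
    }
    return ran;
}
```

The real macros add the memory-tracker locking on top of exactly this try/catch shape.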
@@ -159,9 +159,9 @@ public:
      */
     Pool(const std::string & db_,
         const std::string & server_,
-        const std::string & user_ = "",
-        const std::string & password_ = "",
-        unsigned port_ = 0,
+        const std::string & user_,
+        const std::string & password_,
+        unsigned port_,
         const std::string & socket_ = "",
         unsigned connect_timeout_ = MYSQLXX_DEFAULT_TIMEOUT,
         unsigned rw_timeout_ = MYSQLXX_DEFAULT_RW_TIMEOUT,
@@ -2,7 +2,6 @@
 #include <ctime>
 #include <random>
 #include <thread>
 
 #include <mysqlxx/PoolWithFailover.h>
-
 
@@ -15,9 +14,12 @@ static bool startsWith(const std::string & s, const char * prefix)
 
 using namespace mysqlxx;
 
-PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & config_,
-        const std::string & config_name_, const unsigned default_connections_,
-        const unsigned max_connections_, const size_t max_tries_)
+PoolWithFailover::PoolWithFailover(
+        const Poco::Util::AbstractConfiguration & config_,
+        const std::string & config_name_,
+        const unsigned default_connections_,
+        const unsigned max_connections_,
+        const size_t max_tries_)
     : max_tries(max_tries_)
 {
     shareable = config_.getBool(config_name_ + ".share_connection", false);
@@ -59,16 +61,38 @@ PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & con
     }
 }
 
-PoolWithFailover::PoolWithFailover(const std::string & config_name_, const unsigned default_connections_,
-        const unsigned max_connections_, const size_t max_tries_)
-    : PoolWithFailover{
-        Poco::Util::Application::instance().config(), config_name_,
-        default_connections_, max_connections_, max_tries_}
+PoolWithFailover::PoolWithFailover(
+        const std::string & config_name_,
+        const unsigned default_connections_,
+        const unsigned max_connections_,
+        const size_t max_tries_)
+    : PoolWithFailover{Poco::Util::Application::instance().config(),
+        config_name_, default_connections_, max_connections_, max_tries_}
 {
 }
 
+PoolWithFailover::PoolWithFailover(
+        const std::string & database,
+        const RemoteDescription & addresses,
+        const std::string & user,
+        const std::string & password,
+        size_t max_tries_)
+    : max_tries(max_tries_)
+    , shareable(false)
+{
+    /// Replicas have the same priority, but traversed replicas are moved to the end of the queue.
+    for (const auto & [host, port] : addresses)
+    {
+        replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port));
+    }
+}
+
 PoolWithFailover::PoolWithFailover(const PoolWithFailover & other)
-    : max_tries{other.max_tries}, shareable{other.shareable}
+    : max_tries{other.max_tries}
+    , shareable{other.shareable}
 {
     if (shareable)
     {
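The new constructor above registers every address under priority 0, and the comment notes that traversed replicas move to the end of the queue. A simplified sketch of that selection policy — the `Replica` type and `try_connect` callback are stand-ins for illustration, not the real mysqlxx API:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

using Replica = std::string;
/// [priority][index] -> replica. Highest priority is 0.
using ReplicasByPriority = std::map<int, std::vector<Replica>>;

/// Walks priority buckets in ascending order; within a bucket, each tried
/// replica is rotated to the back so the next call starts from a different one.
/// Returns the first replica for which try_connect succeeds, or "" if none.
Replica pick_replica(ReplicasByPriority & replicas,
                     const std::function<bool(const Replica &)> & try_connect)
{
    for (auto & [priority, bucket] : replicas)
    {
        for (size_t i = 0; i < bucket.size(); ++i)
        {
            Replica candidate = bucket.front();
            bucket.erase(bucket.begin());
            bucket.push_back(candidate); // traversed replica goes to the end of the queue
            if (try_connect(candidate))
                return candidate;
        }
    }
    return {};
}
```

With every replica in one priority-0 bucket, as in the constructor above, this degenerates to round-robin; lower-numbered buckets are always exhausted before higher ones are tried.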
@@ -11,6 +11,8 @@
 namespace mysqlxx
 {
     /** MySQL connection pool with support of failover.
+     *
+     * For dictionary source:
      * Have information about replicas and their priorities.
      * Tries to connect to replica in an order of priority. When equal priority, choose replica with maximum time without connections.
      *
@@ -68,42 +70,58 @@ namespace mysqlxx
         using PoolPtr = std::shared_ptr<Pool>;
         using Replicas = std::vector<PoolPtr>;
 
-        /// [priority][index] -> replica.
+        /// [priority][index] -> replica. Highest priority is 0.
         using ReplicasByPriority = std::map<int, Replicas>;
 
         ReplicasByPriority replicas_by_priority;
 
         /// Number of connection tries.
         size_t max_tries;
         /// Mutex for set of replicas.
         std::mutex mutex;
 
         /// Can the Pool be shared
         bool shareable;
 
     public:
         using Entry = Pool::Entry;
+        using RemoteDescription = std::vector<std::pair<std::string, uint16_t>>;
 
         /**
-         * config_name           Name of parameter in configuration file.
+         * * MySQL dictionary source related params:
+         * config_name           Name of parameter in configuration file for dictionary source.
+         *
+         * * MySQL storage related parameters:
+         * replicas_description
+         *
+         * * Mutual parameters:
          * default_connections   Number of connection in pool to each replica at start.
         * max_connections       Maximum number of connections in pool to each replica.
         * max_tries_            Max number of connection tries.
         */
-        PoolWithFailover(const std::string & config_name_,
+        PoolWithFailover(
+            const std::string & config_name_,
             unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
             unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
             size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);
 
-        PoolWithFailover(const Poco::Util::AbstractConfiguration & config_,
+        PoolWithFailover(
+            const Poco::Util::AbstractConfiguration & config_,
             const std::string & config_name_,
             unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
             unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
             size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);
 
+        PoolWithFailover(
+            const std::string & database,
+            const RemoteDescription & addresses,
+            const std::string & user,
+            const std::string & password,
+            size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);
+
         PoolWithFailover(const PoolWithFailover & other);
 
         /** Allocates a connection to use. */
         Entry get();
     };
 
+    using PoolWithFailoverPtr = std::shared_ptr<PoolWithFailover>;
 }
@@ -1,7 +1,7 @@
 if (CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64")
     set (ARCH_AMD64 1)
 endif ()
-if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)")
+if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*|arm64.*|ARM64.*)")
     set (ARCH_AARCH64 1)
 endif ()
 if (ARCH_AARCH64 OR CMAKE_SYSTEM_PROCESSOR MATCHES "arm")
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54450)
+SET(VERSION_REVISION 54451)
 SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 5)
+SET(VERSION_MINOR 6)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 3827789b3d8fd2021952e57e5110343d26daa1a1)
-SET(VERSION_DESCRIBE v21.5.1.1-prestable)
-SET(VERSION_STRING 21.5.1.1)
+SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96)
+SET(VERSION_DESCRIBE v21.6.1.1-prestable)
+SET(VERSION_STRING 21.6.1.1)
 # end of autochange
@@ -1,11 +1,14 @@
 set (DEFAULT_LIBS "-nodefaultlibs")
 
-if (NOT COMPILER_CLANG)
-    message (FATAL_ERROR "Darwin build is supported only for Clang")
-endif ()
-
 set (DEFAULT_LIBS "${DEFAULT_LIBS} ${COVERAGE_OPTION} -lc -lm -lpthread -ldl")
 
+if (COMPILER_GCC)
+    set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc_eh")
+    if (ARCH_AARCH64)
+        set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc")
+    endif ()
+endif ()
+
 message(STATUS "Default libraries: ${DEFAULT_LIBS}")
 
 set(CMAKE_CXX_STANDARD_LIBRARIES ${DEFAULT_LIBS})
14 cmake/darwin/toolchain-aarch64.cmake Normal file
@@ -0,0 +1,14 @@
+set (CMAKE_SYSTEM_NAME "Darwin")
+set (CMAKE_SYSTEM_PROCESSOR "aarch64")
+set (CMAKE_C_COMPILER_TARGET "aarch64-apple-darwin")
+set (CMAKE_CXX_COMPILER_TARGET "aarch64-apple-darwin")
+set (CMAKE_ASM_COMPILER_TARGET "aarch64-apple-darwin")
+set (CMAKE_OSX_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../toolchain/darwin-aarch64")
+
+set (CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY) # disable linkage check - it doesn't work in CMake
+
+set (HAS_PRE_1970_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE)
+set (HAS_PRE_1970_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE)
+
+set (HAS_POST_2038_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE)
+set (HAS_POST_2038_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE)
@@ -1,3 +1,8 @@
+if (OS_DARWIN AND COMPILER_GCC)
+    # AMQP-CPP requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082
+    set (ENABLE_AMQPCPP OFF CACHE INTERNAL "")
+endif()
+
 option(ENABLE_AMQPCPP "Enalbe AMQP-CPP" ${ENABLE_LIBRARIES})
 
 if (NOT ENABLE_AMQPCPP)
@@ -1,3 +1,8 @@
+if (OS_DARWIN AND COMPILER_GCC)
+    # Cassandra requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082
+    set (ENABLE_CASSANDRA OFF CACHE INTERNAL "")
+endif()
+
 option(ENABLE_CASSANDRA "Enable Cassandra" ${ENABLE_LIBRARIES})
 
 if (NOT ENABLE_CASSANDRA)
@@ -32,7 +32,9 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE)
     if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
         message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")
 
-        set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND})
+        set (CMAKE_CXX_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_CXX_COMPILER_LAUNCHER})
+        set (CMAKE_C_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_C_COMPILER_LAUNCHER})
+
         set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND})
 
         # debian (debhelpers) set SOURCE_DATE_EPOCH environment variable, that is
|
|||||||
( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "aarch64" ) OR
|
( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "aarch64" ) OR
|
||||||
( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "ppc64le" ) OR
|
( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "ppc64le" ) OR
|
||||||
( "${_system_name}" STREQUAL "freebsd" AND "${_system_processor}" STREQUAL "x86_64" ) OR
|
( "${_system_name}" STREQUAL "freebsd" AND "${_system_processor}" STREQUAL "x86_64" ) OR
|
||||||
( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" )
|
( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" ) OR
|
||||||
|
( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "aarch64" )
|
||||||
)
|
)
|
||||||
set (_ldap_supported_platform TRUE)
|
set (_ldap_supported_platform TRUE)
|
||||||
endif ()
|
endif ()
|
||||||
|
16 cmake/find/nanodbc.cmake Normal file
@@ -0,0 +1,16 @@
+if (NOT ENABLE_ODBC)
+    return ()
+endif ()
+
+if (NOT USE_INTERNAL_NANODBC_LIBRARY)
+    message (FATAL_ERROR "Only the bundled nanodbc library can be used")
+endif ()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/CMakeLists.txt")
+    message (FATAL_ERROR "submodule contrib/nanodbc is missing. to fix try run: \n git submodule update --init --recursive")
+endif()
+
+set (NANODBC_LIBRARY nanodbc)
+set (NANODBC_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/nanodbc")
+
+message (STATUS "Using nanodbc: ${NANODBC_INCLUDE_DIR} : ${NANODBC_LIBRARY}")
@@ -11,7 +11,7 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/NuRaft/CMakeLists.txt")
     return()
 endif ()
 
-if (NOT OS_FREEBSD AND NOT OS_DARWIN)
+if (NOT OS_FREEBSD)
     set (USE_NURAFT 1)
     set (NURAFT_LIBRARY nuraft)
@@ -50,4 +50,6 @@ if (NOT EXTERNAL_ODBC_LIBRARY_FOUND)
     set (USE_INTERNAL_ODBC_LIBRARY 1)
 endif ()
 
+set (USE_INTERNAL_NANODBC_LIBRARY 1)
+
 message (STATUS "Using unixodbc")
@@ -1,3 +1,7 @@
+if (OS_DARWIN AND ARCH_AARCH64)
+    set (ENABLE_ROCKSDB OFF CACHE INTERNAL "")
+endif()
+
 option(ENABLE_ROCKSDB "Enable ROCKSDB" ${ENABLE_LIBRARIES})
 
 if (NOT ENABLE_ROCKSDB)
27 cmake/find/xz.cmake Normal file
@@ -0,0 +1,27 @@
+option (USE_INTERNAL_XZ_LIBRARY "Set to OFF to use system xz (lzma) library instead of bundled" ${NOT_UNBUNDLED})
+
+if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api/lzma.h")
+    if(USE_INTERNAL_XZ_LIBRARY)
+        message(WARNING "submodule contrib/xz is missing. to fix try run: \n git submodule update --init --recursive")
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal xz (lzma) library")
+        set(USE_INTERNAL_XZ_LIBRARY 0)
+    endif()
+    set(MISSING_INTERNAL_XZ_LIBRARY 1)
+endif()
+
+if (NOT USE_INTERNAL_XZ_LIBRARY)
+    find_library (XZ_LIBRARY lzma)
+    find_path (XZ_INCLUDE_DIR NAMES lzma.h PATHS ${XZ_INCLUDE_PATHS})
+    if (NOT XZ_LIBRARY OR NOT XZ_INCLUDE_DIR)
+        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system xz (lzma) library")
+    endif ()
+endif ()
+
+if (XZ_LIBRARY AND XZ_INCLUDE_DIR)
+elseif (NOT MISSING_INTERNAL_XZ_LIBRARY)
+    set (USE_INTERNAL_XZ_LIBRARY 1)
+    set (XZ_LIBRARY liblzma)
+    set (XZ_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api)
+endif ()
+
+message (STATUS "Using xz (lzma): ${XZ_INCLUDE_DIR} : ${XZ_LIBRARY}")
@@ -171,6 +171,7 @@ elseif (COMPILER_GCC)
     add_cxx_compile_options(-Wtrampolines)
     # Obvious
     add_cxx_compile_options(-Wunused)
+    add_cxx_compile_options(-Wundef)
     # Warn if vector operation is not implemented via SIMD capabilities of the architecture
     add_cxx_compile_options(-Wvector-operation-performance)
     # XXX: libstdc++ has some of these for 3way compare
10 contrib/CMakeLists.txt vendored
@@ -47,7 +47,11 @@ add_subdirectory (lz4-cmake)
 add_subdirectory (murmurhash)
 add_subdirectory (replxx-cmake)
 add_subdirectory (unixodbc-cmake)
+add_subdirectory (nanodbc-cmake)
+
+if (USE_INTERNAL_XZ_LIBRARY)
 add_subdirectory (xz)
+endif()
 
 add_subdirectory (poco-cmake)
 add_subdirectory (croaring-cmake)
@@ -93,14 +97,8 @@ if (USE_INTERNAL_ZLIB_LIBRARY)
     add_subdirectory (${INTERNAL_ZLIB_NAME})
     # We should use same defines when including zlib.h as used when zlib compiled
     target_compile_definitions (zlib PUBLIC ZLIB_COMPAT WITH_GZFILEOP)
-    if (TARGET zlibstatic)
-        target_compile_definitions (zlibstatic PUBLIC ZLIB_COMPAT WITH_GZFILEOP)
-    endif ()
     if (ARCH_AMD64 OR ARCH_AARCH64)
         target_compile_definitions (zlib PUBLIC X86_64 UNALIGNED_OK)
-        if (TARGET zlibstatic)
-            target_compile_definitions (zlibstatic PUBLIC X86_64 UNALIGNED_OK)
-        endif ()
     endif ()
 endif ()
 
2 contrib/NuRaft vendored
@@ -1 +1 @@
-Subproject commit 70468326ad5d72e9497944838484c591dae054ea
+Subproject commit 377f8e77491d9f66ce8e32e88aae19dffe8dc4d7
2 contrib/antlr4-runtime vendored
@@ -1 +1 @@
-Subproject commit a2fa7b76e2ee16d2ad955e9214a90bbf79da66fc
+Subproject commit 672643e9a427ef803abf13bc8cb4989606553d64
2 contrib/boost vendored
@@ -1 +1 @@
-Subproject commit ee24fa55bc46e4d2ce7d0d052cc5a0d9b1be8c36
+Subproject commit a8d43d3142cc6b26fc55bec33f7f6edb1156ab7a
2 contrib/boringssl vendored
@@ -1 +1 @@
-Subproject commit fd9ce1a0406f571507068b9555d0b545b8a18332
+Subproject commit 83c1cda8a0224dc817cbad2966c7ed4acc35f02a
@@ -16,7 +16,7 @@ endif()
 
 if(CMAKE_COMPILER_IS_GNUCXX OR CLANG)
     set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fvisibility=hidden -fno-common -fno-exceptions -fno-rtti")
-    if(APPLE)
+    if(APPLE AND CLANG)
         set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
     endif()
 
2 contrib/flatbuffers vendored
@@ -1 +1 @@
-Subproject commit 6df40a2471737b27271bdd9b900ab5f3aec746c7
+Subproject commit 22e3ffc66d2d7d72d1414390aa0f04ffd114a5a1
@@ -1,7 +1,10 @@
-if (SANITIZE OR NOT (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE) OR NOT (OS_LINUX OR OS_FREEBSD OR OS_DARWIN))
+if (SANITIZE OR NOT (
+    ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE)) OR
+    (OS_DARWIN AND CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo")
+))
     if (ENABLE_JEMALLOC)
         message (${RECONFIGURE_MESSAGE_LEVEL}
-                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64 or ppc64le on linux or freebsd.")
+                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64, or ppc64le Linux or FreeBSD builds and RelWithDebInfo macOS builds.")
     endif ()
     set (ENABLE_JEMALLOC OFF)
 else ()
@@ -34,9 +37,9 @@ if (OS_LINUX)
     # avoid spurious latencies and additional work associated with
     # MADV_DONTNEED. See
     # https://github.com/ClickHouse/ClickHouse/issues/11121 for motivation.
-    set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:10000")
+    set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000")
 else()
-    set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:10000")
+    set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000")
 endif()
 # CACHE variable is empty, to allow changing defaults without necessity
 # to purge cache
@@ -121,13 +124,15 @@ target_include_directories(jemalloc SYSTEM PRIVATE
 target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_NO_PRIVATE_NAMESPACE)
 
 if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG")
-    target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1 -DJEMALLOC_PROF=1)
+    target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1)
+endif ()
+
+target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_PROF=1)
 
 if (USE_UNWIND)
     target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1)
     target_link_libraries (jemalloc PRIVATE unwind)
 endif ()
-endif ()
 
 target_compile_options(jemalloc PRIVATE -Wno-redundant-decls)
 # for RTLD_NEXT
@@ -42,7 +42,7 @@
  * total number of bits in a pointer, e.g. on x64, for which the uppermost 16
  * bits are the same as bit 47.
  */
-#define LG_VADDR 48
+#define LG_VADDR 64
 
 /* Defined if C11 atomics are available. */
 #define JEMALLOC_C11_ATOMICS 1
@@ -101,11 +101,6 @@
  */
 #define JEMALLOC_HAVE_MACH_ABSOLUTE_TIME 1
 
-/*
- * Defined if clock_gettime(CLOCK_REALTIME, ...) is available.
- */
-#define JEMALLOC_HAVE_CLOCK_REALTIME 1
-
 /*
  * Defined if _malloc_thread_cleanup() exists.  At least in the case of
  * FreeBSD, pthread_key_create() allocates, which if used during malloc
@@ -181,14 +176,14 @@
 /* #undef LG_QUANTUM */
 
 /* One page is 2^LG_PAGE bytes. */
-#define LG_PAGE 16
+#define LG_PAGE 14
 
 /*
  * One huge page is 2^LG_HUGEPAGE bytes.  Note that this is defined even if the
  * system does not explicitly support huge pages; system calls that require
  * explicit huge page support are separately configured.
  */
-#define LG_HUGEPAGE 29
+#define LG_HUGEPAGE 21
 
 /*
  * If defined, adjacent virtual memory mappings with identical attributes
@@ -356,7 +351,7 @@
 /* #undef JEMALLOC_EXPORT */
 
 /* config.malloc_conf options string. */
-#define JEMALLOC_CONFIG_MALLOC_CONF "@JEMALLOC_CONFIG_MALLOC_CONF@"
+#define JEMALLOC_CONFIG_MALLOC_CONF ""
 
 /* If defined, jemalloc takes the malloc/free/etc. symbol names. */
 /* #undef JEMALLOC_IS_MALLOC */
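The LG_* values in this jemalloc config are base-2 logarithms of byte sizes: the change from LG_PAGE 16 to 14 moves the assumed page size from 64 KiB down to 16 KiB (the page size used on Apple Silicon), and LG_HUGEPAGE 21 corresponds to 2 MiB. A quick standalone check of that arithmetic:

```cpp
#include <cstddef>

// Log2 exponents as in the new config above.
constexpr std::size_t LG_PAGE = 14;
constexpr std::size_t LG_HUGEPAGE = 21;

// jemalloc derives actual byte sizes by shifting: size = 2^LG_*.
constexpr std::size_t PAGE = std::size_t{1} << LG_PAGE;
constexpr std::size_t HUGEPAGE = std::size_t{1} << LG_HUGEPAGE;

static_assert(PAGE == 16 * 1024, "2^14 bytes = 16 KiB");
static_assert(HUGEPAGE == 2 * 1024 * 1024, "2^21 bytes = 2 MiB");
```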
2 contrib/libcxx vendored
@@ -1 +1 @@
-Subproject commit 8b80a151d12b98ffe2d0c22f7cec12c3b9ff88d7
+Subproject commit 2fa892f69acbaa40f8a18c6484854a6183a34482
@@ -56,6 +56,11 @@ if (USE_UNWIND)
     target_compile_definitions(cxx PUBLIC -DSTD_EXCEPTION_HAS_STACK_TRACE=1)
 endif ()
 
+# Override the deduced attribute support that causes error.
+if (OS_DARWIN AND COMPILER_GCC)
+    add_compile_definitions(_LIBCPP_INIT_PRIORITY_MAX)
+endif ()
+
 target_compile_options(cxx PUBLIC $<$<COMPILE_LANGUAGE:CXX>:-nostdinc++>)
 
 # Third party library may have substandard code.
@@ -66,7 +66,7 @@
 #cmakedefine WITH_SASL_OAUTHBEARER 1
 #cmakedefine WITH_SASL_CYRUS 1
 // crc32chw
-#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32))
+#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32)) && !(defined(__aarch64__) && defined(__APPLE__))
 #define WITH_CRC32C_HW 1
 #endif
 // regex
@@ -75,6 +75,8 @@
 #define HAVE_STRNDUP 1
 // strerror_r
 #define HAVE_STRERROR_R 1
+// rand_r
+#define HAVE_RAND_R 1
+
 #ifdef __APPLE__
 // pthread_setname_np
2 contrib/mariadb-connector-c vendored
@@ -1 +1 @@
-Subproject commit f4476ee7311b35b593750f6ae2cbdb62a4006374
+Subproject commit 5f4034a3a6376416504f17186c55fe401c6d8e5e
1 contrib/nanodbc vendored Submodule
@@ -0,0 +1 @@
+Subproject commit 9fc459675515d491401727ec67fca38db721f28c
18 contrib/nanodbc-cmake/CMakeLists.txt Normal file
@@ -0,0 +1,18 @@
+if (NOT USE_INTERNAL_NANODBC_LIBRARY)
+    return ()
+endif ()
+
+set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/nanodbc)
+
+if (NOT TARGET unixodbc)
+    message(FATAL_ERROR "Configuration error: unixodbc is not a target")
+endif()
+
+set (SRCS
+    ${LIBRARY_DIR}/nanodbc/nanodbc.cpp
+)
+
+add_library(nanodbc ${SRCS})
+
+target_link_libraries (nanodbc PUBLIC unixodbc)
+target_include_directories (nanodbc SYSTEM PUBLIC ${LIBRARY_DIR}/)
63 contrib/openldap-cmake/darwin_aarch64/include/lber_types.h Normal file
@@ -0,0 +1,63 @@
+/* include/lber_types.h. Generated from lber_types.hin by configure. */
+/* $OpenLDAP$ */
+/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
+ *
+ * Copyright 1998-2020 The OpenLDAP Foundation.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted only as authorized by the OpenLDAP
+ * Public License.
+ *
+ * A copy of this license is available in file LICENSE in the
+ * top-level directory of the distribution or, alternatively, at
+ * <http://www.OpenLDAP.org/license.html>.
+ */
+
+/*
+ * LBER types
+ */
+
+#ifndef _LBER_TYPES_H
+#define _LBER_TYPES_H
+
+#include <ldap_cdefs.h>
+
+LDAP_BEGIN_DECL
+
+/* LBER boolean, enum, integers (32 bits or larger) */
+#define LBER_INT_T int
+
+/* LBER tags (32 bits or larger) */
+#define LBER_TAG_T long
+
+/* LBER socket descriptor */
+#define LBER_SOCKET_T int
+
+/* LBER lengths (32 bits or larger) */
+#define LBER_LEN_T long
+
+/* ------------------------------------------------------------ */
+
+/* booleans, enumerations, and integers */
+typedef LBER_INT_T ber_int_t;
+
+/* signed and unsigned versions */
+typedef signed LBER_INT_T ber_sint_t;
+typedef unsigned LBER_INT_T ber_uint_t;
+
+/* tags */
+typedef unsigned LBER_TAG_T ber_tag_t;
+
+/* "socket" descriptors */
+typedef LBER_SOCKET_T ber_socket_t;
+
+/* lengths */
+typedef unsigned LBER_LEN_T ber_len_t;
+
+/* signed lengths */
+typedef signed LBER_LEN_T ber_slen_t;
+
+LDAP_END_DECL
+
+#endif /* _LBER_TYPES_H */
74 contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h Normal file
@@ -0,0 +1,74 @@
+/* include/ldap_config.h. Generated from ldap_config.hin by configure. */
+/* $OpenLDAP$ */
+/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
+ *
+ * Copyright 1998-2020 The OpenLDAP Foundation.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted only as authorized by the OpenLDAP
+ * Public License.
+ *
+ * A copy of this license is available in file LICENSE in the
+ * top-level directory of the distribution or, alternatively, at
+ * <http://www.OpenLDAP.org/license.html>.
+ */
+
+/*
+ * This file works in conjunction with OpenLDAP configure system.
+ * If you do no like the values below, adjust your configure options.
+ */
+
+#ifndef _LDAP_CONFIG_H
+#define _LDAP_CONFIG_H
+
+/* directory separator */
+#ifndef LDAP_DIRSEP
+#ifndef _WIN32
+#define LDAP_DIRSEP "/"
+#else
+#define LDAP_DIRSEP "\\"
+#endif
+#endif
+
+/* directory for temporary files */
+#if defined(_WIN32)
+# define LDAP_TMPDIR "C:\\." /* we don't have much of a choice */
+#elif defined( _P_tmpdir )
+# define LDAP_TMPDIR _P_tmpdir
+#elif defined( P_tmpdir )
+# define LDAP_TMPDIR P_tmpdir
+#elif defined( _PATH_TMPDIR )
+# define LDAP_TMPDIR _PATH_TMPDIR
+#else
+# define LDAP_TMPDIR LDAP_DIRSEP "tmp"
+#endif
+
+/* directories */
+#ifndef LDAP_BINDIR
+#define LDAP_BINDIR "/tmp/ldap-prefix/bin"
+#endif
+#ifndef LDAP_SBINDIR
+#define LDAP_SBINDIR "/tmp/ldap-prefix/sbin"
+#endif
+#ifndef LDAP_DATADIR
+#define LDAP_DATADIR "/tmp/ldap-prefix/share/openldap"
+#endif
+#ifndef LDAP_SYSCONFDIR
+#define LDAP_SYSCONFDIR "/tmp/ldap-prefix/etc/openldap"
+#endif
+#ifndef LDAP_LIBEXECDIR
+#define LDAP_LIBEXECDIR "/tmp/ldap-prefix/libexec"
+#endif
+#ifndef LDAP_MODULEDIR
+#define LDAP_MODULEDIR "/tmp/ldap-prefix/libexec/openldap"
+#endif
+#ifndef LDAP_RUNDIR
+#define LDAP_RUNDIR "/tmp/ldap-prefix/var"
+#endif
+#ifndef LDAP_LOCALEDIR
+#define LDAP_LOCALEDIR ""
+#endif
+
+
+#endif /* _LDAP_CONFIG_H */
@@ -0,0 +1,61 @@
+/* include/ldap_features.h. Generated from ldap_features.hin by configure. */
+/* $OpenLDAP$ */
+/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
+ *
+ * Copyright 1998-2020 The OpenLDAP Foundation.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted only as authorized by the OpenLDAP
+ * Public License.
+ *
+ * A copy of this license is available in file LICENSE in the
+ * top-level directory of the distribution or, alternatively, at
+ * <http://www.OpenLDAP.org/license.html>.
+ */
+
+/*
+ * LDAP Features
+ */
+
+#ifndef _LDAP_FEATURES_H
+#define _LDAP_FEATURES_H 1
+
+/* OpenLDAP API version macros */
+#define LDAP_VENDOR_VERSION 20501
+#define LDAP_VENDOR_VERSION_MAJOR 2
+#define LDAP_VENDOR_VERSION_MINOR 5
+#define LDAP_VENDOR_VERSION_PATCH X
+
+/*
+** WORK IN PROGRESS!
+**
+** OpenLDAP reentrancy/thread-safeness should be dynamically
+** checked using ldap_get_option().
+**
+** The -lldap implementation is not thread-safe.
+**
+** The -lldap_r implementation is:
+**   LDAP_API_FEATURE_THREAD_SAFE (basic thread safety)
+** but also be:
+**   LDAP_API_FEATURE_SESSION_THREAD_SAFE
+**   LDAP_API_FEATURE_OPERATION_THREAD_SAFE
+**
+** The preprocessor flag LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE
+** can be used to determine if -lldap_r is available at compile
+** time. You must define LDAP_THREAD_SAFE if and only if you
+** link with -lldap_r.
+**
+** If you fail to define LDAP_THREAD_SAFE when linking with
+** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap,
+** provided header definitions and declarations may be incorrect.
+**
+*/
+
+/* is -lldap_r available or not */
+#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1
+
+/* LDAP v2 Referrals */
+/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */
+
+#endif /* LDAP_FEATURES */
1169 contrib/openldap-cmake/darwin_aarch64/include/portable.h Normal file
File diff suppressed because it is too large
2 contrib/poco vendored
@@ -1 +1 @@
-Subproject commit 83beecccb09eec0c9fd2669cacea03ede1d9f138
+Subproject commit b7d9ec16ee33ca76643d5fcd907ea9a33285640a
@@ -233,3 +233,10 @@ else ()
 message (STATUS "Using Poco::Foundation: ${LIBRARY_POCO_FOUNDATION} ${INCLUDE_POCO_FOUNDATION}")
 endif ()
 
+
+if(OS_DARWIN AND ARCH_AARCH64)
+    target_compile_definitions (_poco_foundation
+        PRIVATE
+            POCO_NO_STAT64
+    )
+endif()
@@ -142,14 +142,14 @@ if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
   endif(HAS_ALTIVEC)
 endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
 
-if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
+if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64")
   CHECK_C_COMPILER_FLAG("-march=armv8-a+crc+crypto" HAS_ARMV8_CRC)
   if(HAS_ARMV8_CRC)
     message(STATUS " HAS_ARMV8_CRC yes")
     set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
     set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function")
   endif(HAS_ARMV8_CRC)
-endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64")
+endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64")
 
 
 include(CheckCXXSourceCompiles)
2 contrib/zlib-ng vendored
@@ -1 +1 @@
-Subproject commit 6fd1846c8b8f59436fe2dd752d0f316ddbb64df6
+Subproject commit 5cc4d232020dc66d1d6c5438834457e2a2f6127b
4 debian/changelog vendored
@@ -1,5 +1,5 @@
-clickhouse (21.5.1.1) unstable; urgency=low
+clickhouse (21.6.1.1) unstable; urgency=low
 
   * Modified source code
 
- -- clickhouse-release <clickhouse-release@yandex-team.ru> Fri, 02 Apr 2021 18:34:26 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Tue, 20 Apr 2021 01:48:16 +0300
2 debian/clickhouse-common-static.install vendored
@@ -1,5 +1,5 @@
 usr/bin/clickhouse
 usr/bin/clickhouse-odbc-bridge
+usr/bin/clickhouse-library-bridge
 usr/bin/clickhouse-extract-from-config
 usr/share/bash-completion/completions
-etc/security/limits.d/clickhouse.conf
16 debian/clickhouse-server.config vendored
@@ -1,16 +0,0 @@
-#!/bin/sh -e
-
-test -f /usr/share/debconf/confmodule && . /usr/share/debconf/confmodule
-
-db_fget clickhouse-server/default-password seen || true
-password_seen="$RET"
-
-if [ "$1" = "reconfigure" ]; then
-    password_seen=false
-fi
-
-if [ "$password_seen" != "true" ]; then
-    db_input high clickhouse-server/default-password || true
-    db_go || true
-fi
-db_go || true
8 debian/clickhouse-server.postinst vendored
@@ -23,11 +23,13 @@ if [ ! -f "/etc/debian_version" ]; then
 fi
 
 if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then
 
+    ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}"
+
     if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then
         # if old rc.d service present - remove it
         if [ -x "/etc/init.d/clickhouse-server" ] && [ -x "/usr/sbin/update-rc.d" ]; then
             /usr/sbin/update-rc.d clickhouse-server remove
-            echo "ClickHouse init script has migrated to systemd. Please manually stop old server and restart the service: sudo killall clickhouse-server && sleep 5 && sudo service clickhouse-server restart"
         fi
 
         /bin/systemctl daemon-reload
@@ -38,10 +40,8 @@ if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then
         if [ -x "/usr/sbin/update-rc.d" ]; then
             /usr/sbin/update-rc.d clickhouse-server defaults 19 19 >/dev/null || exit $?
         else
-            echo # TODO [ "$OS" = "rhel" ] || [ "$OS" = "centos" ] || [ "$OS" = "fedora" ]
+            echo # Other OS
         fi
     fi
 fi
 
-    ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}"
-
 fi
8 debian/clickhouse-server.preinst vendored
@@ -1,8 +0,0 @@
-#!/bin/sh
-
-if [ "$1" = "upgrade" ]; then
-    # Return etc/cron.d/clickhouse-server to original state
-    service clickhouse-server disable_cron ||:
-fi
-
-#DEBHELPER#
6 debian/clickhouse-server.prerm vendored
@@ -1,6 +0,0 @@
-#!/bin/sh
-
-if [ "$1" = "upgrade" ] || [ "$1" = "remove" ]; then
-    # Return etc/cron.d/clickhouse-server to original state
-    service clickhouse-server disable_cron ||:
-fi
3 debian/clickhouse-server.templates vendored
@@ -1,3 +0,0 @@
-Template: clickhouse-server/default-password
-Type: password
-Description: Enter password for default user:
2 debian/clickhouse.limits vendored
@@ -1,2 +0,0 @@
-clickhouse soft nofile 262144
-clickhouse hard nofile 262144
3 debian/rules vendored
@@ -113,9 +113,6 @@ override_dh_install:
 	ln -sf clickhouse-server.docs debian/clickhouse-client.docs
 	ln -sf clickhouse-server.docs debian/clickhouse-common-static.docs
 
-	mkdir -p $(DESTDIR)/etc/security/limits.d
-	cp debian/clickhouse.limits $(DESTDIR)/etc/security/limits.d/clickhouse.conf
-
 	# systemd compatibility
 	mkdir -p $(DESTDIR)/etc/systemd/system/
 	cp debian/clickhouse-server.service $(DESTDIR)/etc/systemd/system/
2 debian/watch vendored
@@ -1,6 +1,6 @@
 version=4
 
 opts="filenamemangle=s%(?:.*?)?v?(\d[\d.]*)-stable\.tar\.gz%clickhouse-$1.tar.gz%" \
-    https://github.com/yandex/clickhouse/tags \
+    https://github.com/ClickHouse/ClickHouse/tags \
     (?:.*?/)?v?(\d[\d.]*)-stable\.tar\.gz debian uupdate
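The `filenamemangle` rule above lets uscan rewrite an upstream release-tag tarball name into the Debian-style `clickhouse-<version>.tar.gz`. A hedged sketch of the same substitution using `sed -E` (simplified from uscan's Perl-style regex; the tag name is illustrative):

```shell
# Simplified sed analogue of the debian/watch filenamemangle rule above:
# strip an optional leading "v" and the "-stable" suffix, prefix "clickhouse-".
# (uscan itself evaluates the Perl-style s%...%...% expression.)
echo "v21.6.1.1-stable.tar.gz" \
    | sed -E 's%^v?([0-9][0-9.]*)-stable\.tar\.gz$%clickhouse-\1.tar.gz%'
# prints clickhouse-21.6.1.1.tar.gz
```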
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.5.1.*
+ARG version=21.6.1.*
 
 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@@ -138,7 +138,8 @@
         "docker/test/stateless_unbundled",
         "docker/test/stateless_pytest",
         "docker/test/integration/base",
-        "docker/test/fuzzer"
+        "docker/test/fuzzer",
+        "docker/test/keeper-jepsen"
     ]
 },
 "docker/packager/unbundled": {
@@ -159,5 +160,9 @@
 "docker/test/sqlancer": {
     "name": "yandex/clickhouse-sqlancer-test",
     "dependent": []
+},
+"docker/test/keeper-jepsen": {
+    "name": "yandex/clickhouse-keeper-jepsen-test",
+    "dependent": []
 }
 }
@@ -35,35 +35,32 @@ RUN apt-get update \
 RUN apt-get update \
     && apt-get install \
-        bash \
-        cmake \
-        ccache \
-        curl \
-        gcc-9 \
-        g++-9 \
-        clang-10 \
-        clang-tidy-10 \
-        lld-10 \
-        llvm-10 \
-        llvm-10-dev \
-        clang-11 \
-        clang-tidy-11 \
-        lld-11 \
-        llvm-11 \
-        llvm-11-dev \
-        libicu-dev \
-        libreadline-dev \
-        ninja-build \
-        gperf \
-        git \
-        opencl-headers \
-        ocl-icd-libopencl1 \
-        intel-opencl-icd \
-        tzdata \
-        gperf \
-        cmake \
-        gdb \
-        rename \
-        build-essential \
+        bash \
+        build-essential \
+        ccache \
+        clang-10 \
+        clang-11 \
+        clang-tidy-10 \
+        clang-tidy-11 \
+        cmake \
+        curl \
+        g++-9 \
+        gcc-9 \
+        gdb \
+        git \
+        gperf \
+        libicu-dev \
+        libreadline-dev \
+        lld-10 \
+        lld-11 \
+        llvm-10 \
+        llvm-10-dev \
+        llvm-11 \
+        llvm-11-dev \
+        moreutils \
+        ninja-build \
+        pigz \
+        rename \
+        tzdata \
         --yes --no-install-recommends
 
 # This symlink required by gcc to find lld compiler
@@ -111,4 +108,4 @@ RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update
 
 
 COPY build.sh /
-CMD ["/bin/bash", "/build.sh"]
+CMD ["bash", "-c", "/build.sh 2>&1 | ts"]
@@ -11,17 +11,28 @@ tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build/cmake/toolc
 mkdir -p build/cmake/toolchain/freebsd-x86_64
 tar xJf freebsd-11.3-toolchain.tar.xz -C build/cmake/toolchain/freebsd-x86_64 --strip-components=1
 
+# Uncomment to debug ccache. Don't put ccache log in /output right away, or it
+# will be confusingly packed into the "performance" package.
+# export CCACHE_LOGFILE=/build/ccache.log
+# export CCACHE_DEBUG=1
+
 mkdir -p build/build_docker
 cd build/build_docker
-ccache --show-stats ||:
-ccache --zero-stats ||:
-ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
 rm -f CMakeCache.txt
 # Read cmake arguments into array (possibly empty)
 read -ra CMAKE_FLAGS <<< "${CMAKE_FLAGS:-}"
 cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" ..
+
+ccache --show-config ||:
+ccache --show-stats ||:
+ccache --zero-stats ||:
+
 # shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty.
 ninja $NINJA_FLAGS clickhouse-bundle
+
+ccache --show-config ||:
+ccache --show-stats ||:
+
 mv ./programs/clickhouse* /output
 mv ./src/unit_tests_dbms /output ||: # may not exist for some binary builds
 find . -name '*.so' -print -exec mv '{}' /output \;
@@ -65,8 +76,21 @@ then
     cp ../programs/server/config.xml /output/config
     cp ../programs/server/users.xml /output/config
     cp -r --dereference ../programs/server/config.d /output/config
-    tar -czvf "$COMBINED_OUTPUT.tgz" /output
+    tar -cv -I pigz -f "$COMBINED_OUTPUT.tgz" /output
     rm -r /output/*
     mv "$COMBINED_OUTPUT.tgz" /output
 fi
-ccache --show-stats ||:
+
+if [ "${CCACHE_DEBUG:-}" == "1" ]
+then
+    find . -name '*.ccache-*' -print0 \
+        | tar -c -I pixz -f /output/ccache-debug.txz --null -T -
+fi
+
+if [ -n "$CCACHE_LOGFILE" ]
+then
+    # Compress the log as well, or else the CI will try to compress all log
+    # files in place, and will fail because this directory is not writable.
+    tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE"
+fi
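The `read -ra CMAKE_FLAGS <<< "${CMAKE_FLAGS:-}"` line kept above is the Bash idiom for splitting an optional space-separated flag string into an array without tripping over an unset variable. A minimal sketch of the idiom (the flag values are illustrative):

```shell
#!/usr/bin/env bash
# ${FLAGS_STR:-} expands to "" when the variable is unset, so `read` gets
# empty input instead of an error; -a fills an array, -r keeps backslashes
# literal. "${FLAGS[@]}" then expands to exactly the parsed words
# (zero words when the string was empty, not one empty word).
FLAGS_STR="-DCMAKE_BUILD_TYPE=RelWithDebInfo -DSANITIZE=address"
read -ra FLAGS <<< "${FLAGS_STR:-}"
echo "${#FLAGS[@]}"    # prints 2
```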
@@ -34,31 +34,32 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
 # Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
 RUN apt-get update \
     && apt-get install \
-        gcc-9 \
-        g++-9 \
-        clang-11 \
-        clang-tidy-11 \
-        lld-11 \
-        llvm-11 \
-        llvm-11-dev \
-        clang-10 \
-        clang-tidy-10 \
-        lld-10 \
-        llvm-10 \
-        llvm-10-dev \
-        ninja-build \
-        perl \
-        pkg-config \
-        devscripts \
-        debhelper \
-        git \
-        tzdata \
-        gperf \
-        alien \
-        cmake \
-        gdb \
-        moreutils \
-        pigz \
+        alien \
+        clang-10 \
+        clang-11 \
+        clang-tidy-10 \
+        clang-tidy-11 \
+        cmake \
+        debhelper \
+        devscripts \
+        g++-9 \
+        gcc-9 \
+        gdb \
+        git \
+        gperf \
+        lld-10 \
+        lld-11 \
+        llvm-10 \
+        llvm-10-dev \
+        llvm-11 \
+        llvm-11-dev \
+        moreutils \
+        ninja-build \
+        perl \
+        pigz \
+        pixz \
+        pkg-config \
+        tzdata \
         --yes --no-install-recommends
 
 # NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable.
@@ -2,8 +2,14 @@
 
 set -x -e
 
+# Uncomment to debug ccache.
+# export CCACHE_LOGFILE=/build/ccache.log
+# export CCACHE_DEBUG=1
+
+ccache --show-config ||:
 ccache --show-stats ||:
 ccache --zero-stats ||:
+
 read -ra ALIEN_PKGS <<< "${ALIEN_PKGS:-}"
 build/release --no-pbuilder "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S'
 mv /*.deb /output
@@ -22,5 +28,19 @@ then
     mv /build/obj-*/src/unit_tests_dbms /output/binary
   fi
 fi
+
+ccache --show-config ||:
 ccache --show-stats ||:
-ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
+
+if [ "${CCACHE_DEBUG:-}" == "1" ]
+then
+    find /build -name '*.ccache-*' -print0 \
+        | tar -c -I pixz -f /output/ccache-debug.txz --null -T -
+fi
+
+if [ -n "$CCACHE_LOGFILE" ]
+then
+    # Compress the log as well, or else the CI will try to compress all log
+    # files in place, and will fail because this directory is not writable.
+    tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE"
+fi
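The build scripts above lean on the `||:` suffix (as in `ccache --show-stats ||:`): `:` is the shell's always-successful no-op builtin, so a failing, merely-informational command cannot abort a script running under `set -e`. A minimal sketch of the idiom:

```shell
set -e
# Without `||:` the failing command would terminate the script here,
# because `set -e` exits on any non-zero status; `:` always exits 0.
false ||:
echo "still running"    # prints still running
```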
@@ -35,9 +35,6 @@ RUN apt-get update \
         libjemalloc-dev \
         libmsgpack-dev \
         libcurl4-openssl-dev \
-        opencl-headers \
-        ocl-icd-libopencl1 \
-        intel-opencl-icd \
         unixodbc-dev \
         odbcinst \
         tzdata \
@@ -13,4 +13,3 @@ mv /*.rpm /output ||: # if exists
 mv /*.tgz /output ||: # if exists
 
 ccache --show-stats ||:
-ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
@@ -1,7 +1,7 @@
 FROM ubuntu:20.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.5.1.*
+ARG version=21.6.1.*
 ARG gosu_ver=1.10
 
 # set non-empty deb_location_url url to create a docker image
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.5.1.*
+ARG version=21.6.1.*
 
 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
|
@ -300,6 +300,7 @@ function run_tests
|
|||||||
01663_aes_msan # Depends on OpenSSL
|
01663_aes_msan # Depends on OpenSSL
|
||||||
01667_aes_args_check # Depends on OpenSSL
|
01667_aes_args_check # Depends on OpenSSL
|
||||||
01776_decrypt_aead_size_check # Depends on OpenSSL
|
01776_decrypt_aead_size_check # Depends on OpenSSL
|
||||||
|
01811_filter_by_null # Depends on OpenSSL
|
||||||
01281_unsucceeded_insert_select_queries_counter
|
01281_unsucceeded_insert_select_queries_counter
|
||||||
01292_create_user
|
01292_create_user
|
||||||
01294_lazy_database_concurrent
|
01294_lazy_database_concurrent
|
||||||
@ -307,10 +308,8 @@ function run_tests
|
|||||||
01354_order_by_tuple_collate_const
|
01354_order_by_tuple_collate_const
|
||||||
01355_ilike
|
01355_ilike
|
||||||
01411_bayesian_ab_testing
|
01411_bayesian_ab_testing
|
||||||
01532_collate_in_low_cardinality
|
collate
|
||||||
01533_collate_in_nullable
|
collation
|
||||||
01542_collate_in_array
|
|
||||||
01543_collate_in_tuple
|
|
||||||
_orc_
|
_orc_
|
||||||
arrow
|
arrow
|
||||||
avro
|
avro
|
||||||
@ -365,6 +364,12 @@ function run_tests
|
|||||||
|
|
||||||
# JSON functions
|
# JSON functions
|
||||||
01666_blns
|
01666_blns
|
||||||
|
|
||||||
|
# Requires postgresql-client
|
||||||
|
01802_test_postgresql_protocol_with_row_policy
|
||||||
|
|
||||||
|
# Depends on AWS
|
||||||
|
01801_s3_cluster
|
||||||
)
|
)
|
||||||
|
|
||||||
(time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
|
(time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
|
||||||
|
@ -198,7 +198,7 @@ case "$stage" in
|
|||||||
# Lost connection to the server. This probably means that the server died
|
# Lost connection to the server. This probably means that the server died
|
||||||
# with abort.
|
# with abort.
|
||||||
echo "failure" > status.txt
|
echo "failure" > status.txt
|
||||||
if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
|
if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
|
||||||
then
|
then
|
||||||
echo "Lost connection to server. See the logs." > description.txt
|
echo "Lost connection to server. See the logs." > description.txt
|
||||||
fi
|
fi
|
||||||
|
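The only change in this hunk is adding `SUMMARY: AddressSanitizer:.*` to the alternation, so ASan crash summaries are also extracted into `description.txt`. A small sketch of how `grep -ao` pulls just the matching fragment out of a log line (the log line is made up):

```shell
# -a forces text mode even if the log contains binary bytes;
# -o prints only the part of the line that matched the pattern.
echo "2021.04.20 01:48:16 SUMMARY: AddressSanitizer: heap-use-after-free (pc 0x...)" \
    | grep -ao "SUMMARY: AddressSanitizer:.*"
# prints SUMMARY: AddressSanitizer: heap-use-after-free (pc 0x...)
```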
@@ -19,7 +19,8 @@ RUN apt-get update \
         tar \
         krb5-user \
         iproute2 \
-        lsof
+        lsof \
+        g++
 RUN rm -rf \
     /var/lib/apt/lists/* \
     /var/cache/debconf \
@@ -31,6 +31,7 @@ RUN apt-get update \
         software-properties-common \
         libkrb5-dev \
         krb5-user \
+        g++ \
     && rm -rf \
         /var/lib/apt/lists/* \
         /var/cache/debconf \
@@ -0,0 +1,23 @@
+version: '2.3'
+services:
+    mysql2:
+        image: mysql:5.7
+        restart: always
+        environment:
+            MYSQL_ROOT_PASSWORD: clickhouse
+        ports:
+            - 3348:3306
+    mysql3:
+        image: mysql:5.7
+        restart: always
+        environment:
+            MYSQL_ROOT_PASSWORD: clickhouse
+        ports:
+            - 3388:3306
+    mysql4:
+        image: mysql:5.7
+        restart: always
+        environment:
+            MYSQL_ROOT_PASSWORD: clickhouse
+        ports:
+            - 3368:3306
@@ -11,10 +11,3 @@ services:
             default:
                 aliases:
                     - postgre-sql.local
-    postgres2:
-        image: postgres
-        restart: always
-        environment:
-            POSTGRES_PASSWORD: mysecretpassword
-        ports:
-            - 5441:5432
@@ -0,0 +1,23 @@
+version: '2.3'
+services:
+    postgres2:
+        image: postgres
+        restart: always
+        environment:
+            POSTGRES_PASSWORD: mysecretpassword
+        ports:
+            - 5421:5432
+    postgres3:
+        image: postgres
+        restart: always
+        environment:
+            POSTGRES_PASSWORD: mysecretpassword
+        ports:
+            - 5441:5432
+    postgres4:
+        image: postgres
+        restart: always
+        environment:
+            POSTGRES_PASSWORD: mysecretpassword
+        ports:
+            - 5461:5432
@@ -21,6 +21,7 @@ export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse
 export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse
 export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=/clickhouse-config
 export CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH=/clickhouse-odbc-bridge
+export CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH=/clickhouse-library-bridge

 export DOCKER_MYSQL_GOLANG_CLIENT_TAG=${DOCKER_MYSQL_GOLANG_CLIENT_TAG:=latest}
 export DOCKER_MYSQL_JAVA_CLIENT_TAG=${DOCKER_MYSQL_JAVA_CLIENT_TAG:=latest}
docker/test/keeper-jepsen/Dockerfile (new file, 39 lines)
@@ -0,0 +1,39 @@
+# docker build -t yandex/clickhouse-keeper-jepsen-test .
+FROM yandex/clickhouse-test-base
+
+ENV DEBIAN_FRONTEND=noninteractive
+ENV CLOJURE_VERSION=1.10.3.814
+
+# arguments
+ENV PR_TO_TEST=""
+ENV SHA_TO_TEST=""
+
+ENV NODES_USERNAME="root"
+ENV NODES_PASSWORD=""
+ENV TESTS_TO_RUN="30"
+ENV TIME_LIMIT="30"
+
+
+# volumes
+ENV NODES_FILE_PATH="/nodes.txt"
+ENV TEST_OUTPUT="/test_output"
+
+RUN mkdir "/root/.ssh"
+RUN touch "/root/.ssh/known_hosts"
+
+# install java
+RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends
+
+# install clojure
+RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \
+    chmod +x "linux-install-${CLOJURE_VERSION}.sh" && \
+    bash "./linux-install-${CLOJURE_VERSION}.sh"
+
+# install leiningen
+RUN curl -O "https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein" && \
+    chmod +x ./lein && \
+    mv ./lein /usr/bin
+
+COPY run.sh /
+
+CMD ["/bin/bash", "/run.sh"]
docker/test/keeper-jepsen/run.sh (new file, 22 lines)
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+
+CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"}
+CLICKHOUSE_REPO_PATH=${CLICKHOUSE_REPO_PATH:=""}
+
+
+if [ -z "$CLICKHOUSE_REPO_PATH" ]; then
+    CLICKHOUSE_REPO_PATH=ch
+    rm -rf ch ||:
+    mkdir ch ||:
+    wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/clickhouse_no_subs.tar.gz"
+    tar -C ch --strip-components=1 -xf clickhouse_no_subs.tar.gz
+    ls -lath ||:
+fi
+
+cd "$CLICKHOUSE_REPO_PATH/tests/jepsen.clickhouse-keeper"
+
+(lein run test-all --nodes-file "$NODES_FILE_PATH" --username "$NODES_USERNAME" --logging-json --password "$NODES_PASSWORD" --time-limit "$TIME_LIMIT" --concurrency 50 -r 50 --snapshot-distance 100 --stale-log-gap 100 --reserved-log-items 10 --lightweight-run --clickhouse-source "$CLICKHOUSE_PACKAGE" -q --test-count "$TESTS_TO_RUN" || true) | tee "$TEST_OUTPUT/jepsen_run_all_tests.log"
+
+mv store "$TEST_OUTPUT/"
@@ -1,6 +1,7 @@
 <yandex>
     <http_port remove="remove"/>
     <mysql_port remove="remove"/>
+    <postgresql_port remove="remove"/>
     <interserver_http_port remove="remove"/>
     <tcp_with_proxy_port remove="remove"/>
    <keeper_server remove="remove"/>
@@ -17,6 +17,9 @@

         <!-- One NUMA node w/o hyperthreading -->
         <max_threads>12</max_threads>
+
+        <!-- mmap shows some improvements in perf tests -->
+        <min_bytes_to_use_mmap_io>64Mi</min_bytes_to_use_mmap_io>
     </default>
 </profiles>
 <users>
@@ -66,7 +66,12 @@ reportStageEnd('parse')
 subst_elems = root.findall('substitutions/substitution')
 available_parameters = {} # { 'table': ['hits_10m', 'hits_100m'], ... }
 for e in subst_elems:
-    available_parameters[e.find('name').text] = [v.text for v in e.findall('values/value')]
+    name = e.find('name').text
+    values = [v.text for v in e.findall('values/value')]
+    if not values:
+        raise Exception(f'No values given for substitution {{{name}}}')
+
+    available_parameters[name] = values

 # Takes parallel lists of templates, substitutes them with all combos of
 # parameters. The set of parameters is determined based on the first list.
@@ -21,14 +21,14 @@ function start()
     -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
     --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
     --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
-    --mysql_port 19004 \
+    --mysql_port 19004 --postgresql_port 19005 \
     --keeper_server.tcp_port 19181 --keeper_server.server_id 2

     sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \
     -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
     --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
     --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \
-    --mysql_port 29004 \
+    --mysql_port 29004 --postgresql_port 29005 \
     --keeper_server.tcp_port 29181 --keeper_server.server_id 3
 fi

@@ -28,7 +28,8 @@ RUN apt-get update -y \
     tree \
     unixodbc \
     wget \
-    mysql-client=5.7*
+    mysql-client=5.7* \
+    postgresql-client

 RUN pip3 install numpy scipy pandas

|
|||||||
-- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
|
-- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
|
||||||
--logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
|
--logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
|
||||||
--tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
|
--tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
|
||||||
--mysql_port 19004 \
|
--mysql_port 19004 --postgresql_port 19005 \
|
||||||
--keeper_server.tcp_port 19181 --keeper_server.server_id 2 \
|
--keeper_server.tcp_port 19181 --keeper_server.server_id 2 \
|
||||||
--macros.replica r2 # It doesn't work :(
|
--macros.replica r2 # It doesn't work :(
|
||||||
|
|
||||||
@@ -52,7 +52,7 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
     -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \
     --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \
     --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \
-    --mysql_port 29004 \
+    --mysql_port 29004 --postgresql_port 29005 \
     --keeper_server.tcp_port 29181 --keeper_server.server_id 3 \
     --macros.shard s2 # It doesn't work :(

@@ -104,6 +104,12 @@ clickhouse-client -q "system flush logs" ||:
 pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz &
 clickhouse-client -q "select * from system.query_log format TSVWithNamesAndTypes" | pigz > /test_output/query-log.tsv.gz &
 clickhouse-client -q "select * from system.query_thread_log format TSVWithNamesAndTypes" | pigz > /test_output/query-thread-log.tsv.gz &
+clickhouse-client --allow_introspection_functions=1 -q "
+    WITH
+        arrayMap(x -> concat(demangle(addressToSymbol(x)), ':', addressToLine(x)), trace) AS trace_array,
+        arrayStringConcat(trace_array, '\n') AS trace_string
+    SELECT * EXCEPT(trace), trace_string FROM system.trace_log FORMAT TSVWithNamesAndTypes
+" | pigz > /test_output/trace-log.tsv.gz &
 wait ||:

 mv /var/log/clickhouse-server/stderr.log /test_output/ ||:
@@ -112,10 +118,13 @@ if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
 fi
 tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||:
 tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
+tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

 if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
     pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
     pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
     mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
     mv /var/log/clickhouse-server/stderr2.log /test_output/ ||:
+    tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
+    tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||:
 fi
@@ -14,9 +14,7 @@ RUN apt-get --allow-unauthenticated update -y \
     expect \
     gdb \
     gperf \
-    gperf \
     heimdal-multidev \
-    intel-opencl-icd \
     libboost-filesystem-dev \
     libboost-iostreams-dev \
     libboost-program-options-dev \
@@ -50,9 +48,7 @@ RUN apt-get --allow-unauthenticated update -y \
     moreutils \
     ncdu \
     netcat-openbsd \
-    ocl-icd-libopencl1 \
     odbcinst \
-    opencl-headers \
     openssl \
     perl \
     pigz \
@@ -108,6 +108,11 @@ zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" > /dev
     || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
 rm -f /test_output/tmp

+# OOM
+zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
+    && echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
+    || echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
+
 # Logical errors
 zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
     && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
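The zgrep-and-echo checks in the hunk above all follow one pattern: scan the server log for a fixed marker and emit a single tab-separated status line. A minimal standalone sketch of that pattern (the function name `log_check` is illustrative):

```python
def log_check(log_text, marker, fail_msg, ok_msg):
    """Return one TSV status line: FAIL if the marker occurs in the log, OK otherwise."""
    if marker in log_text:
        return fail_msg + "\tFAIL"
    return ok_msg + "\tOK"

# Example: the new OOM check from the hunk above, on a synthetic log line.
status = log_check(
    "2021.04.12 10:00:00 <Fatal> Application: Child process was terminated by signal 9",
    " <Fatal> Application: Child process was terminated by signal 9",
    "OOM killer (or signal 9) in clickhouse-server.log",
    "No OOM messages in clickhouse-server.log",
)
print(status)
# -> OOM killer (or signal 9) in clickhouse-server.log	FAIL
```

Each such line is appended to `test_results.tsv`, which the final `clickhouse-local` query then turns into an overall check status.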
@@ -118,7 +123,7 @@ zgrep -Fa "########################################" /var/log/clickhouse-server/
     && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
     || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv

-# It also checks for OOM or crash without stacktrace (printed by watchdog)
+# It also checks for crash without stacktrace (printed by watchdog)
 zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
     && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
     || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
@@ -131,6 +136,7 @@ pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhous
 tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:
 mv /var/log/clickhouse-server/stderr.log /test_output/
 tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
+tar -chf /test_output/trace_log_dump.tar /var/lib/clickhouse/data/system/trace_log ||:

 # Write check result into check_status.tsv
 clickhouse-local --structure "test String, res String" -q "SELECT 'failure', test FROM table WHERE res != 'OK' order by (lower(test) like '%hung%') LIMIT 1" < /test_output/test_results.tsv > /test_output/check_status.tsv
@@ -1,7 +1,7 @@
 #!/usr/bin/env python3
 # -*- coding: utf-8 -*-
 from multiprocessing import cpu_count
-from subprocess import Popen, call, STDOUT
+from subprocess import Popen, call, check_output, STDOUT
 import os
 import sys
 import shutil
@@ -85,10 +85,27 @@ def prepare_for_hung_check():
     # Issue #21004, live views are experimental, so let's just suppress it
     call("""clickhouse client -q "KILL QUERY WHERE upper(query) LIKE 'WATCH %'" """, shell=True, stderr=STDOUT)

-    # Wait for last queries to finish if any, not longer than 120 seconds
+    # Kill other queries which known to be slow
+    # It's query from 01232_preparing_sets_race_condition_long, it may take up to 1000 seconds in slow builds
+    call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'insert into tableB select %'" """, shell=True, stderr=STDOUT)
+    # Long query from 00084_external_agregation
+    call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'SELECT URL, uniq(SearchPhrase) AS u FROM test.hits GROUP BY URL ORDER BY u %'" """, shell=True, stderr=STDOUT)
+
+    # Wait for last queries to finish if any, not longer than 300 seconds
     call("""clickhouse client -q "select sleepEachRow((
-    select maxOrDefault(120 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 120
+    select maxOrDefault(300 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 300
-    ) / 120) from numbers(120) format Null" """, shell=True, stderr=STDOUT)
+    ) / 300) from numbers(300) format Null" """, shell=True, stderr=STDOUT)
+
+    # Even if all clickhouse-test processes are finished, there are probably some sh scripts,
+    # which still run some new queries. Let's ignore them.
+    try:
+        query = """clickhouse client -q "SELECT count() FROM system.processes where where elapsed > 300" """
+        output = check_output(query, shell=True, stderr=STDOUT).decode('utf-8').strip()
+        if int(output) == 0:
+            return False
+    except:
+        pass
+    return True

 if __name__ == "__main__":
     logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
@@ -119,12 +136,12 @@ if __name__ == "__main__":

     logging.info("All processes finished")
     if args.hung_check:
-        prepare_for_hung_check()
+        have_long_running_queries = prepare_for_hung_check()
         logging.info("Checking if some queries hung")
         cmd = "{} {} {}".format(args.test_cmd, "--hung-check", "00001_select_1")
         res = call(cmd, shell=True, stderr=STDOUT)
         hung_check_status = "No queries hung\tOK\n"
-        if res != 0:
+        if res != 0 and have_long_running_queries:
             logging.info("Hung check failed with exit code {}".format(res))
             hung_check_status = "Hung check failed\tFAIL\n"
         open(os.path.join(args.output_folder, "test_results.tsv"), 'w+').write(hung_check_status)
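The gating change in the hunk above means a hung-check failure is only reported when long-running queries were actually observed. The decision logic, sketched as a pure function (the name `hung_check_status` is illustrative):

```python
def hung_check_status(exit_code, have_long_running_queries):
    """Report FAIL only when the hung check failed AND long-running queries existed."""
    if exit_code != 0 and have_long_running_queries:
        return "Hung check failed\tFAIL\n"
    return "No queries hung\tOK\n"

# A non-zero exit code alone no longer fails the check:
print(hung_check_status(1, False))  # -> No queries hung	OK
print(hung_check_status(1, True))   # -> Hung check failed	FAIL
```

This avoids false FAILs from stray shell scripts that keep issuing short queries after the main test processes finish.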
@@ -31,9 +31,10 @@ toc_title: Cloud

 ## Alibaba Cloud {#alibaba-cloud}

-Alibaba Cloud Managed Service for ClickHouse [China Site](https://www.aliyun.com/product/clickhouse) (Will be available at international site at May, 2021) provides the following key features:
-- Highly reliable cloud disk storage engine based on Alibaba Cloud Apsara distributed system
-- Expand capacity on demand without manual data migration
+Alibaba Cloud Managed Service for ClickHouse. [China Site](https://www.aliyun.com/product/clickhouse) (will be available at the international site in May 2021). Provides the following key features:
+
+- Highly reliable cloud disk storage engine based on [Alibaba Cloud Apsara](https://www.alibabacloud.com/product/apsara-stack) distributed system
+- Expand capacity on-demand without manual data migration
 - Support single-node, single-replica, multi-node, and multi-replica architectures, and support hot and cold data tiering
 - Support access allow-list, one-key recovery, multi-layer network security protection, cloud disk encryption
 - Seamless integration with cloud log systems, databases, and data application tools
@@ -5,12 +5,13 @@ toc_title: Build on Mac OS X

 # How to Build ClickHouse on Mac OS X {#how-to-build-clickhouse-on-mac-os-x}

-Build should work on x86_64 (Intel) based macOS 10.15 (Catalina) and higher with recent Xcode's native AppleClang, or Homebrew's vanilla Clang or GCC compilers.
+Build should work on x86_64 (Intel) and arm64 (Apple Silicon) based macOS 10.15 (Catalina) and higher with recent Xcode's native AppleClang, or Homebrew's vanilla Clang or GCC compilers.

 ## Install Homebrew {#install-homebrew}

 ``` bash
-$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+# ...and follow the printed instructions on any additional steps required to complete the installation.
 ```

 ## Install Xcode and Command Line Tools {#install-xcode-and-command-line-tools}
@@ -22,8 +23,8 @@ Open it at least once to accept the end-user license agreement and automatically
 Then, make sure that the latest Comman Line Tools are installed and selected in the system:

 ``` bash
-$ sudo rm -rf /Library/Developer/CommandLineTools
-$ sudo xcode-select --install
+sudo rm -rf /Library/Developer/CommandLineTools
+sudo xcode-select --install
 ```

 Reboot.
@@ -31,14 +32,15 @@ Reboot.
 ## Install Required Compilers, Tools, and Libraries {#install-required-compilers-tools-and-libraries}

 ``` bash
-$ brew update
-$ brew install cmake ninja libtool gettext llvm gcc
+brew update
+brew install cmake ninja libtool gettext llvm gcc
 ```

 ## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

 ``` bash
-$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git # or https://github.com/ClickHouse/ClickHouse.git
+git clone --recursive git@github.com:ClickHouse/ClickHouse.git
+# ...alternatively, you can use https://github.com/ClickHouse/ClickHouse.git as the repo URL.
 ```

 ## Build ClickHouse {#build-clickhouse}
@@ -46,37 +48,37 @@ $ git clone --recursive git@github.com:ClickHouse/ClickHouse.git # or https://gi
 To build using Xcode's native AppleClang compiler:

 ``` bash
-$ cd ClickHouse
-$ rm -rf build
-$ mkdir build
-$ cd build
-$ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF ..
-$ cmake --build . --config RelWithDebInfo
-$ cd ..
+cd ClickHouse
+rm -rf build
+mkdir build
+cd build
+cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
+cmake --build . --config RelWithDebInfo
+cd ..
 ```

 To build using Homebrew's vanilla Clang compiler:

 ``` bash
-$ cd ClickHouse
-$ rm -rf build
-$ mkdir build
-$ cd build
-$ cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER==$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF ..
-$ cmake --build . --config RelWithDebInfo
-$ cd ..
+cd ClickHouse
+rm -rf build
+mkdir build
+cd build
+cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
+cmake --build . --config RelWithDebInfo
+cd ..
 ```

 To build using Homebrew's vanilla GCC compiler:

 ``` bash
-$ cd ClickHouse
-$ rm -rf build
-$ mkdir build
-$ cd build
-$ cmake -DCMAKE_C_COMPILER=$(brew --prefix gcc)/bin/gcc-10 -DCMAKE_CXX_COMPILER=$(brew --prefix gcc)/bin/g++-10 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF ..
-$ cmake --build . --config RelWithDebInfo
-$ cd ..
+cd ClickHouse
+rm -rf build
+mkdir build
+cd build
+cmake -DCMAKE_C_COMPILER=$(brew --prefix gcc)/bin/gcc-10 -DCMAKE_CXX_COMPILER=$(brew --prefix gcc)/bin/g++-10 -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
+cmake --build . --config RelWithDebInfo
+cd ..
 ```

 ## Caveats {#caveats}
@@ -115,7 +117,7 @@ To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the
 Execute the following command:

 ``` bash
-$ sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
+sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
 ```

 Reboot.
@@ -33,47 +33,14 @@ sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"

 For other Linux distribution - check the availability of the [prebuild packages](https://releases.llvm.org/download.html) or build clang [from sources](https://clang.llvm.org/get_started.html).

-#### Use clang-11 for Builds {#use-gcc-10-for-builds}
+#### Use clang-11 for Builds

 ``` bash
 $ export CC=clang-11
 $ export CXX=clang++-11
 ```

-### Install GCC 10 {#install-gcc-10}
-
-We recommend building ClickHouse with clang-11, GCC-10 also supported, but it is not used for production builds.
-
-If you want to use GCC-10 there are several ways to install it.
-
-#### Install from Repository {#install-from-repository}
-
-On Ubuntu 19.10 or newer:
-
-    $ sudo apt-get update
-    $ sudo apt-get install gcc-10 g++-10
-
-#### Install from a PPA Package {#install-from-a-ppa-package}
-
-On older Ubuntu:
-
-``` bash
-$ sudo apt-get install software-properties-common
-$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
-$ sudo apt-get update
-$ sudo apt-get install gcc-10 g++-10
-```
-
-#### Install from Sources {#install-from-sources}
-
-See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)
-
-#### Use GCC 10 for Builds {#use-gcc-10-for-builds}
-
-``` bash
-$ export CC=gcc-10
-$ export CXX=g++-10
-```
+Gcc can also be used though it is discouraged.

 ### Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@ -5,36 +5,87 @@ toc_title: Third-Party Libraries Used
|
|||||||
|
|
||||||
# Third-Party Libraries Used {#third-party-libraries-used}
|
# Third-Party Libraries Used {#third-party-libraries-used}
|
||||||
|
|
||||||
| Library | License |
|
The list of third-party libraries can be obtained by the following query:
|
||||||
|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------|
|
|
||||||
| base64 | [BSD 2-Clause License](https://github.com/aklomp/base64/blob/a27c565d1b6c676beaf297fe503c4518185666f7/LICENSE) |
|
```
|
||||||
| boost | [Boost Software License 1.0](https://github.com/ClickHouse-Extras/boost-extra/blob/6883b40449f378019aec792f9983ce3afc7ff16e/LICENSE_1_0.txt) |
|
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en'
|
||||||
| brotli | [MIT](https://github.com/google/brotli/blob/master/LICENSE) |
|
```
|
||||||
| capnproto | [MIT](https://github.com/capnproto/capnproto/blob/master/LICENSE) |
|
|
||||||
| cctz | [Apache License 2.0](https://github.com/google/cctz/blob/4f9776a310f4952454636363def82c2bf6641d5f/LICENSE.txt) |
|
[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)
|
||||||
| double-conversion | [BSD 3-Clause License](https://github.com/google/double-conversion/blob/cf2f0f3d547dc73b4612028a155b80536902ba02/LICENSE) |
|
|
||||||
| FastMemcpy | [MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libmemcpy/impl/LICENSE) |
|
| library_name | license_type | license_path |
|
||||||
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
|
|:-|:-|:-|
|
||||||
| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
|
| abseil-cpp | Apache | /contrib/abseil-cpp/LICENSE |
|
||||||
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
|
| AMQP-CPP | Apache | /contrib/AMQP-CPP/LICENSE |
|
||||||
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
|
| arrow | Apache | /contrib/arrow/LICENSE.txt |
|
||||||
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
|
| avro | Apache | /contrib/avro/LICENSE.txt |
|
||||||
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |
|
| aws | Apache | /contrib/aws/LICENSE.txt |
|
||||||
| libhdfs3 | [Apache License 2.0](https://github.com/ClickHouse-Extras/libhdfs3/blob/bd6505cbb0c130b0db695305b9a38546fa880e5a/LICENSE.txt) |
|
| aws-c-common | Apache | /contrib/aws-c-common/LICENSE |
|
||||||
| libmetrohash | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libmetrohash/LICENSE) |
|
| aws-c-event-stream | Apache | /contrib/aws-c-event-stream/LICENSE |
|
||||||
| libpcg-random | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libpcg-random/LICENSE-APACHE.txt) |
|
| aws-checksums | Apache | /contrib/aws-checksums/LICENSE |
|
||||||
| libressl | [OpenSSL License](https://github.com/ClickHouse-Extras/ssl/blob/master/COPYING) |
|
| base64 | BSD 2-clause | /contrib/base64/LICENSE |
|
||||||
| librdkafka | [BSD 2-Clause License](https://github.com/edenhill/librdkafka/blob/363dcad5a23dc29381cc626620e68ae418b3af19/LICENSE) |
|
| boost | Boost | /contrib/boost/LICENSE_1_0.txt |
|
||||||
| libwidechar_width | [CC0 1.0 Universal](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libwidechar_width/LICENSE) |
|
| boringssl | BSD | /contrib/boringssl/LICENSE |
|
||||||
| llvm | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/llvm/blob/163def217817c90fb982a6daf384744d8472b92b/llvm/LICENSE.TXT) |
|
| brotli | MIT | /contrib/brotli/LICENSE |
|
||||||
| lz4 | [BSD 2-Clause License](https://github.com/lz4/lz4/blob/c10863b98e1503af90616ae99725ecd120265dfb/LICENSE) |
|
| capnproto | MIT | /contrib/capnproto/LICENSE |
|
||||||
| mariadb-connector-c | [LGPL v2.1](https://github.com/ClickHouse-Extras/mariadb-connector-c/blob/3.1/COPYING.LIB) |
|
| cassandra | Apache | /contrib/cassandra/LICENSE.txt |
|
||||||
| murmurhash | [Public Domain](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/murmurhash/LICENSE) |
|
| cctz | Apache | /contrib/cctz/LICENSE.txt |
|
||||||
| pdqsort | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/pdqsort/license.txt) |
|
| cityhash102 | MIT | /contrib/cityhash102/COPYING |
|
||||||
| poco | [Boost Software License - Version 1.0](https://github.com/ClickHouse-Extras/poco/blob/fe5505e56c27b6ecb0dcbc40c49dc2caf4e9637f/LICENSE) |
|
| cppkafka | BSD 2-clause | /contrib/cppkafka/LICENSE |
|
||||||
| protobuf | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/protobuf/blob/12735370922a35f03999afff478e1c6d7aa917a4/LICENSE) |
|
| croaring | Apache | /contrib/croaring/LICENSE |
|
||||||
| re2 | [BSD 3-Clause License](https://github.com/google/re2/blob/7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0/LICENSE) |
|
| curl | Apache | /contrib/curl/docs/LICENSE-MIXING.md |
|
||||||
| sentry-native | [MIT License](https://github.com/getsentry/sentry-native/blob/master/LICENSE) |
|
| cyrus-sasl | BSD 2-clause | /contrib/cyrus-sasl/COPYING |
|
||||||
| UnixODBC | [LGPL v2.1](https://github.com/ClickHouse-Extras/UnixODBC/tree/b0ad30f7f6289c12b76f04bfb9d466374bb32168) |
|
| double-conversion | BSD 3-clause | /contrib/double-conversion/LICENSE |
|
||||||
| zlib-ng | [Zlib License](https://github.com/ClickHouse-Extras/zlib-ng/blob/develop/LICENSE.md) |
|
| dragonbox | Apache | /contrib/dragonbox/LICENSE-Apache2-LLVM |
|
||||||
| zstd | [BSD 3-Clause License](https://github.com/facebook/zstd/blob/dev/LICENSE) |
|
| fast_float | Apache | /contrib/fast_float/LICENSE |
|
||||||
|
| fastops | MIT | /contrib/fastops/LICENSE |
|
||||||
|
| flatbuffers | Apache | /contrib/flatbuffers/LICENSE.txt |
|
||||||
|
| fmtlib | Unknown | /contrib/fmtlib/LICENSE.rst |
|
||||||
|
| gcem | Apache | /contrib/gcem/LICENSE |
|
||||||
|
| googletest | BSD 3-clause | /contrib/googletest/LICENSE |
|
||||||
|
| grpc | Apache | /contrib/grpc/LICENSE |
|
||||||
|
| h3 | Apache | /contrib/h3/LICENSE |
|
||||||
|
| hyperscan | Boost | /contrib/hyperscan/LICENSE |
|
||||||
|
| icu | Public Domain | /contrib/icu/icu4c/LICENSE |
|
||||||
|
| icudata | Public Domain | /contrib/icudata/LICENSE |
|
||||||
|
| jemalloc | BSD 2-clause | /contrib/jemalloc/COPYING |
|
||||||
|
| krb5 | MIT | /contrib/krb5/src/lib/gssapi/LICENSE |
|
||||||
|
| libc-headers | LGPL | /contrib/libc-headers/LICENSE |
|
||||||
|
| libcpuid | BSD 2-clause | /contrib/libcpuid/COPYING |
|
||||||
|
| libcxx | Apache | /contrib/libcxx/LICENSE.TXT |
|
||||||
|
| libcxxabi | Apache | /contrib/libcxxabi/LICENSE.TXT |
|
||||||
|
| libdivide | zLib | /contrib/libdivide/LICENSE.txt |
|
||||||
|
| libfarmhash | MIT | /contrib/libfarmhash/COPYING |
|
||||||
|
| libgsasl | LGPL | /contrib/libgsasl/LICENSE |
|
||||||
|
| libhdfs3 | Apache | /contrib/libhdfs3/LICENSE.txt |
|
||||||
|
| libmetrohash | Apache | /contrib/libmetrohash/LICENSE |
|
||||||
|
| libpq | Unknown | /contrib/libpq/COPYRIGHT |
|
||||||
|
| libpqxx | BSD 3-clause | /contrib/libpqxx/COPYING |
|
||||||
|
| librdkafka | MIT | /contrib/librdkafka/LICENSE.murmur2 |
|
||||||
|
| libunwind | Apache | /contrib/libunwind/LICENSE.TXT |
|
||||||
|
| libuv | BSD | /contrib/libuv/LICENSE |
|
||||||
|
| llvm | Apache | /contrib/llvm/llvm/LICENSE.TXT |
|
||||||
|
| lz4 | BSD | /contrib/lz4/LICENSE |
|
||||||
|
| mariadb-connector-c | LGPL | /contrib/mariadb-connector-c/COPYING.LIB |
|
||||||
|
| miniselect | Boost | /contrib/miniselect/LICENSE_1_0.txt |
|
||||||
|
| msgpack-c | Boost | /contrib/msgpack-c/LICENSE_1_0.txt |
|
||||||
|
| murmurhash | Public Domain | /contrib/murmurhash/LICENSE |
|
||||||
|
| NuRaft | Apache | /contrib/NuRaft/LICENSE |
|
||||||
|
| openldap | Unknown | /contrib/openldap/LICENSE |
|
||||||
|
| orc | Apache | /contrib/orc/LICENSE |
|
||||||
|
| poco | Boost | /contrib/poco/LICENSE |
|
||||||
|
| protobuf | BSD 3-clause | /contrib/protobuf/LICENSE |
|
||||||
|
| rapidjson | MIT | /contrib/rapidjson/bin/jsonschema/LICENSE |
|
||||||
|
| re2 | BSD 3-clause | /contrib/re2/LICENSE |
|
||||||
|
| replxx | BSD 3-clause | /contrib/replxx/LICENSE.md |
|
||||||
|
| rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb |
|
||||||
|
| sentry-native | MIT | /contrib/sentry-native/LICENSE |
|
||||||
|
| simdjson | Apache | /contrib/simdjson/LICENSE |
|
||||||
|
| snappy | Public Domain | /contrib/snappy/COPYING |
|
||||||
|
| sparsehash-c11 | BSD 3-clause | /contrib/sparsehash-c11/LICENSE |
|
||||||
|
| stats | Apache | /contrib/stats/LICENSE |
|
||||||
|
| thrift | Apache | /contrib/thrift/LICENSE |
|
||||||
|
| unixodbc | LGPL | /contrib/unixodbc/COPYING |
|
||||||
|
| xz | Public Domain | /contrib/xz/COPYING |
|
||||||
|
| zlib-ng | zLib | /contrib/zlib-ng/LICENSE.md |
|
||||||
|
| zstd | BSD | /contrib/zstd/LICENSE |
|
||||||
|
@ -131,17 +131,18 @@ ClickHouse uses several external libraries for building. All of them do not need
|
|||||||
|
|
||||||
## C++ Compiler {#c-compiler}
|
## C++ Compiler {#c-compiler}
|
||||||
|
|
||||||
Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.
|
Clang starting from version 11 is supported for building ClickHouse.
|
||||||
|
|
||||||
Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations.
|
Clang should be used instead of gcc. Our continuous integration (CI) platform runs checks for about a dozen build combinations.
|
||||||
|
|
||||||
To install GCC on Ubuntu run: `sudo apt install gcc g++`
|
On Ubuntu/Debian you can use the automatic installation script (see the [official webpage](https://apt.llvm.org/)):
|
||||||
|
|
||||||
Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.
|
```bash
|
||||||
|
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
|
||||||
|
```
|
||||||
|
|
||||||
Mac OS X build is supported only for Clang. Just run `brew install llvm`
|
Mac OS X build is also supported. Just run `brew install llvm`.
|
||||||
|
|
||||||
If you decide to use Clang, you can also install `libc++` and `lld`, if you know what they are. Using `ccache` is also recommended.
|
|
||||||
|
|
||||||
## The Building Process {#the-building-process}
|
## The Building Process {#the-building-process}
|
||||||
|
|
||||||
@ -152,14 +153,7 @@ Now that you are ready to build ClickHouse we recommend you to create a separate
|
|||||||
|
|
||||||
You can have several different directories (build_release, build_debug, etc.) for different types of build.
|
You can have several different directories (build_release, build_debug, etc.) for different types of build.
|
||||||
|
|
||||||
While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example).
|
While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler.
|
||||||
|
|
||||||
Linux:
|
|
||||||
|
|
||||||
export CC=gcc-10 CXX=g++-10
|
|
||||||
cmake ..
|
|
||||||
|
|
||||||
Mac OS X:
|
|
||||||
|
|
||||||
export CC=clang CXX=clang++
|
export CC=clang CXX=clang++
|
||||||
cmake ..
|
cmake ..
|
||||||
|
@ -701,7 +701,7 @@ But other things being equal, cross-platform or portable code is preferred.
|
|||||||
|
|
||||||
**2.** Language: C++20 (see the list of available [C++20 features](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)).
|
**2.** Language: C++20 (see the list of available [C++20 features](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)).
|
||||||
|
|
||||||
**3.** Compiler: `gcc`. At this time (August 2020), the code is compiled using version 9.3. (It can also be compiled using `clang 8`.)
|
**3.** Compiler: `clang`. At this time (April 2021), the code is compiled using clang version 11. (It can also be compiled using `gcc` version 10, but it's untested and not suitable for production usage).
|
||||||
|
|
||||||
The standard library is used (`libc++`).
|
The standard library is used (`libc++`).
|
||||||
|
|
||||||
@ -711,7 +711,7 @@ The standard library is used (`libc++`).
|
|||||||
|
|
||||||
The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2.
|
The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2.
|
||||||
|
|
||||||
**6.** Use `-Wall -Wextra -Werror` compilation flags.
|
**6.** Use `-Wall -Wextra -Werror` compilation flags. Also, `-Weverything` is used, with a few exceptions.
|
||||||
|
|
||||||
**7.** Use static linking with all libraries except those that are difficult to connect to statically (see the output of the `ldd` command).
|
**7.** Use static linking with all libraries except those that are difficult to connect to statically (see the output of the `ldd` command).
|
||||||
|
|
||||||
|
@ -3,15 +3,52 @@ toc_priority: 32
|
|||||||
toc_title: Atomic
|
toc_title: Atomic
|
||||||
---
|
---
|
||||||
|
|
||||||
|
|
||||||
# Atomic {#atomic}
|
# Atomic {#atomic}
|
||||||
|
|
||||||
It supports non-blocking `DROP` and `RENAME TABLE` queries and atomic `EXCHANGE TABLES t1 AND t2` queries. `Atomic` database engine is used by default.
|
It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES t1 AND t2](#exchange-tables) queries. `Atomic` database engine is used by default.
|
||||||
|
|
||||||
## Creating a Database {#creating-a-database}
|
## Creating a Database {#creating-a-database}
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE DATABASE test ENGINE = Atomic;
|
CREATE DATABASE test[ ENGINE = Atomic];
|
||||||
```
|
```
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/engines/database-engines/atomic/) <!--hide-->
|
## Specifics and recommendations {#specifics-and-recommendations}
|
||||||
|
|
||||||
|
### Table UUID {#table-uuid}
|
||||||
|
|
||||||
|
All tables in an `Atomic` database have a persistent [UUID](../../sql-reference/data-types/uuid.md) and store their data in the directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is the UUID of the table.
|
||||||
|
Usually, the UUID is generated automatically, but the user can also specify it explicitly when creating the table (this is not recommended). To display the `SHOW CREATE` query with the UUID, use the [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil) setting. For example:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...;
|
||||||
|
```
|
||||||
|
### RENAME TABLE {#rename-table}
|
||||||
|
|
||||||
|
`RENAME` queries are performed without changing the UUID or moving table data. They do not wait for the completion of queries using the table and are executed instantly.
|
||||||
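A minimal illustration of the instant rename described above (the table names are hypothetical):

``` sql
-- Completes instantly; the table keeps its UUID and its data directory.
RENAME TABLE events TO events_old;
```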
|
|
||||||
|
### DROP/DETACH TABLE {#drop-detach-table}
|
||||||
|
|
||||||
|
On `DROP TABLE`, no data is removed immediately: the `Atomic` database marks the table as dropped by moving its metadata to `/clickhouse_path/metadata_dropped/` and notifies a background thread. The delay before the final deletion of table data is specified by the [database_atomic_delay_before_drop_table_sec](../../operations/server-configuration-parameters/settings.md#database_atomic_delay_before_drop_table_sec) setting.
|
||||||
|
You can specify synchronous mode using the `SYNC` modifier, or enable it with the [database_atomic_wait_for_drop_and_detach_synchronously](../../operations/settings/settings.md#database_atomic_wait_for_drop_and_detach_synchronously) setting. In this case `DROP` waits for running `SELECT`, `INSERT` and other queries that are using the table to finish. The table is actually removed when it is no longer in use.
|
||||||
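As a sketch of the two modes described above (the table name `t` is hypothetical):

``` sql
-- Returns immediately; data is removed later by the background thread.
DROP TABLE t;

-- Waits for queries using the table to finish, then removes the data.
DROP TABLE t SYNC;
```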
|
|
||||||
|
### EXCHANGE TABLES {#exchange-tables}
|
||||||
|
|
||||||
|
The `EXCHANGE` query swaps tables atomically. So instead of this non-atomic operation:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table;
|
||||||
|
```
|
||||||
|
you can use one atomic query:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
EXCHANGE TABLES new_table AND old_table;
|
||||||
|
```
|
||||||
|
|
||||||
|
### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}
|
||||||
|
|
||||||
|
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify the engine parameters (the path in ZooKeeper and the replica name). In this case the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) are used. If you want to specify the engine parameters explicitly, it is recommended to use the `{uuid}` macro, so that unique paths are automatically generated for each table in ZooKeeper.
|
||||||
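A sketch of the recommendation above (the table names are hypothetical, and the explicit path assumes the default replica path layout):

``` sql
-- Engine parameters omitted: default_replica_path and default_replica_name apply.
CREATE TABLE r1 (n UInt64) ENGINE = ReplicatedMergeTree ORDER BY n;

-- Explicit parameters using the {uuid} macro, so each table gets a unique ZooKeeper path.
CREATE TABLE r2 (n UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY n;
```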
|
|
||||||
|
## See Also
|
||||||
|
|
||||||
|
- [system.databases](../../operations/system-tables/databases.md) system table
|
||||||
|
@ -19,26 +19,26 @@ ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure,
|
|||||||
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
|
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
|
||||||
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
|
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
|
||||||
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
|
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
|
||||||
- `compression` — Compression type. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. Parameter is optional. By default, it will autodetect compression by file extension.
|
- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will autodetect compression by file extension.
|
||||||
|
|
||||||
**Example:**
|
**Example**
|
||||||
|
|
||||||
**1.** Set up the `s3_engine_table` table:
|
1. Set up the `s3_engine_table` table:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip')
|
CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip');
|
||||||
```
|
```
|
||||||
|
|
||||||
**2.** Fill file:
|
2. Fill file:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3)
|
INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3);
|
||||||
```
|
```
|
||||||
|
|
||||||
**3.** Query the data:
|
3. Query the data:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
SELECT * FROM s3_engine_table LIMIT 2
|
SELECT * FROM s3_engine_table LIMIT 2;
|
||||||
```
|
```
|
||||||
|
|
||||||
```text
|
```text
|
||||||
@ -73,7 +73,57 @@ For more information about virtual columns see [here](../../../engines/table-eng
|
|||||||
|
|
||||||
Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function.
|
Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function.
|
||||||
|
|
||||||
## S3-related Settings {#s3-settings}
|
**Example**
|
||||||
|
|
||||||
|
1. Suppose we have several files in CSV format with the following URIs on S3:
|
||||||
|
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv`
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv`
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv`
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv`
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv`
|
||||||
|
- `https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv`
|
||||||
|
|
||||||
|
There are several ways to make a table consisting of all six files:
|
||||||
|
|
||||||
|
The first way:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV');
|
||||||
|
```
|
||||||
|
|
||||||
|
Another way:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV');
|
||||||
|
```
|
||||||
|
|
||||||
|
The table consists of all the files in both directories (all files must match the format and schema described in the query):
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV');
|
||||||
|
```
|
||||||
|
|
||||||
|
If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`.
|
||||||
|
|
||||||
|
**Example**
|
||||||
|
|
||||||
|
Create table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV');
|
||||||
|
```
|
||||||
|
|
||||||
|
## Virtual Columns {#virtual-columns}
|
||||||
|
|
||||||
|
- `_path` — Path to the file.
|
||||||
|
- `_file` — Name of the file.
|
||||||
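For example, the virtual columns can be used to see which S3 object each row came from (using the `table_with_asterisk` table defined above):

``` sql
SELECT _path, _file, count() AS rows
FROM table_with_asterisk
GROUP BY _path, _file;
```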
|
|
||||||
|
**See Also**
|
||||||
|
|
||||||
|
- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
|
||||||
|
|
||||||
|
## S3-related settings {#settings}
|
||||||
|
|
||||||
The following settings can be set before query execution or placed into the configuration file.
|
The following settings can be set before query execution or placed into the configuration file.
|
||||||
|
|
||||||
@ -90,6 +140,7 @@ The following settings can be specified in configuration file for given endpoint
|
|||||||
- `endpoint` — Specifies prefix of an endpoint. Mandatory.
|
- `endpoint` — Specifies prefix of an endpoint. Mandatory.
|
||||||
- `access_key_id` and `secret_access_key` — Specifies credentials to use with given endpoint. Optional.
|
- `access_key_id` and `secret_access_key` — Specifies credentials to use with given endpoint. Optional.
|
||||||
- `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for given endpoint. Optional, default value is `false`.
|
- `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for given endpoint. Optional, default value is `false`.
|
||||||
|
- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`.
|
||||||
- `header` — Adds specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
|
- `header` — Adds specified HTTP header to a request to the given endpoint. Optional, can be specified multiple times.
|
||||||
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
|
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
|
||||||
|
|
||||||
@ -102,11 +153,13 @@ The following settings can be specified in configuration file for given endpoint
|
|||||||
<!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
|
<!-- <access_key_id>ACCESS_KEY_ID</access_key_id> -->
|
||||||
<!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
|
<!-- <secret_access_key>SECRET_ACCESS_KEY</secret_access_key> -->
|
||||||
<!-- <use_environment_credentials>false</use_environment_credentials> -->
|
<!-- <use_environment_credentials>false</use_environment_credentials> -->
|
||||||
|
<!-- <use_insecure_imds_request>false</use_insecure_imds_request> -->
|
||||||
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
|
<!-- <header>Authorization: Bearer SOME-TOKEN</header> -->
|
||||||
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
|
<!-- <server_side_encryption_customer_key_base64>BASE64-ENCODED-KEY</server_side_encryption_customer_key_base64> -->
|
||||||
</endpoint-name>
|
</endpoint-name>
|
||||||
</s3>
|
</s3>
|
||||||
```
|
```
|
||||||
|
|
||||||
## Usage {#usage-examples}
|
## Usage {#usage-examples}
|
||||||
|
|
||||||
Suppose we have several files in TSV format with the following URIs on HDFS:
|
Suppose we have several files in TSV format with the following URIs on HDFS:
|
||||||
@ -149,8 +202,7 @@ ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_p
|
|||||||
CREATE TABLE big_table (name String, value UInt32)
|
CREATE TABLE big_table (name String, value UInt32)
|
||||||
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV');
|
ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV');
|
||||||
```
|
```
|
||||||
|
|
||||||
## See also
|
## See also
|
||||||
|
|
||||||
- [S3 table function](../../../sql-reference/table-functions/s3.md)
|
- [S3 table function](../../../sql-reference/table-functions/s3.md)
|
||||||
|
|
||||||
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/s3/) <!--hide-->
|
|
||||||
|