Merge branch 'master' into kerberos_tests
Commit f70dbedddc (mirror of https://github.com/ClickHouse/ClickHouse.git)

.gitmodules (vendored, 10 lines changed)
@@ -93,7 +93,7 @@
 	url = https://github.com/ClickHouse-Extras/libunwind.git
 [submodule "contrib/simdjson"]
 	path = contrib/simdjson
-	url = https://github.com/ClickHouse-Extras/simdjson.git
+	url = https://github.com/simdjson/simdjson.git
 [submodule "contrib/rapidjson"]
 	path = contrib/rapidjson
 	url = https://github.com/ClickHouse-Extras/rapidjson
@@ -133,7 +133,7 @@
 	url = https://github.com/unicode-org/icu.git
 [submodule "contrib/flatbuffers"]
 	path = contrib/flatbuffers
-	url = https://github.com/google/flatbuffers.git
+	url = https://github.com/ClickHouse-Extras/flatbuffers.git
 [submodule "contrib/libc-headers"]
 	path = contrib/libc-headers
 	url = https://github.com/ClickHouse-Extras/libc-headers.git
@@ -221,3 +221,9 @@
 [submodule "contrib/NuRaft"]
 	path = contrib/NuRaft
 	url = https://github.com/ClickHouse-Extras/NuRaft.git
+[submodule "contrib/nanodbc"]
+	path = contrib/nanodbc
+	url = https://github.com/ClickHouse-Extras/nanodbc.git
+[submodule "contrib/datasketches-cpp"]
+	path = contrib/datasketches-cpp
+	url = https://github.com/ClickHouse-Extras/datasketches-cpp.git

CHANGELOG.md (157 lines changed)

@@ -1,4 +1,157 @@
-## ClickHouse release 21.3
+## ClickHouse release 21.4

### ClickHouse release 21.4.1 2021-04-12

#### Backward Incompatible Change

* The `toStartOfInterval` function will align hour intervals to midnight (in previous versions they were aligned to the start of the Unix epoch). For example, `toStartOfInterval(x, INTERVAL 11 HOUR)` will split every day into three intervals: `00:00:00..10:59:59`, `11:00:00..21:59:59` and `22:00:00..23:59:59` (see the example after this list). This behaviour is better suited to practical needs. This closes [#9510](https://github.com/ClickHouse/ClickHouse/issues/9510). [#22060](https://github.com/ClickHouse/ClickHouse/pull/22060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* `Age` and `Precision` in graphite rollup configs should increase from retention to retention. This is now checked, and a wrong config raises an exception. [#21496](https://github.com/ClickHouse/ClickHouse/pull/21496) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix `cutToFirstSignificantSubdomainCustom()`/`firstSignificantSubdomainCustom()` returning a wrong result for 3+ level domains present in the custom top-level domain list. For input domains matching these custom top-level domains, the third-level domain was considered to be the first significant one. This is now fixed. This change may introduce incompatibility if the function is used in e.g. the sharding key. [#21946](https://github.com/ClickHouse/ClickHouse/pull/21946) ([Azat Khuzhin](https://github.com/azat)).
* Column `keys` in table `system.dictionaries` was replaced with columns `key.names` and `key.types`. Columns `key.names`, `key.types`, `attribute.names`, `attribute.types` from the `system.dictionaries` table do not require the dictionary to be loaded. [#21884](https://github.com/ClickHouse/ClickHouse/pull/21884) ([Maksim Kita](https://github.com/kitaisreal)).
* Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched for and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**:
  * `ATTACH PART[ITION]` queries may not work during a cluster upgrade.
  * It is not possible to roll back to an older ClickHouse version after executing an `ALTER ... ATTACH` query in the new version, as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
* In this version, an empty `<remote_url_allow_hosts></remote_url_allow_hosts>` element blocks all access to remote hosts, while in previous versions it did nothing. If you want to keep the old behaviour and you have an empty `remote_url_allow_hosts` element in your configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)).
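A minimal sketch of the new alignment described in the first entry above (illustrative; the rendered value depends on the server timezone):

```sql
-- 21.4 aligns hour intervals to midnight, splitting the day into
-- 00:00:00..10:59:59, 11:00:00..21:59:59 and 22:00:00..23:59:59.
SELECT toStartOfInterval(toDateTime('2021-04-12 13:37:00'), INTERVAL 11 HOUR);
-- returns 2021-04-12 11:00:00 (earlier versions aligned to the Unix epoch)
```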

#### New Feature

* Extended the range of `DateTime64` to support dates from year 1925 to 2283. Improved support of `DateTime` around the zero date (`1970-01-01`). [#9404](https://github.com/ClickHouse/ClickHouse/pull/9404) ([alexey-milovidov](https://github.com/alexey-milovidov), [Vasily Nemkov](https://github.com/Enmk)). Not all time and date functions work for the extended range of dates yet.
* Added support of Kerberos authentication for preconfigured users and HTTP requests (GSS-SPNEGO). [#14995](https://github.com/ClickHouse/ClickHouse/pull/14995) ([Denis Glazachev](https://github.com/traceon)).
* Add `prefer_column_name_to_alias` setting to use original column names instead of aliases. It is needed for better compatibility with common databases' aliasing rules. This is for [#9715](https://github.com/ClickHouse/ClickHouse/issues/9715) and [#9887](https://github.com/ClickHouse/ClickHouse/issues/9887). [#22044](https://github.com/ClickHouse/ClickHouse/pull/22044) ([Amos Bird](https://github.com/amosbird)).
* Added functions `dictGetChildren(dictionary, key)` and `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. A zero `level` value is equivalent to infinity. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
* Added `executable_pool` dictionary source. Closes [#14528](https://github.com/ClickHouse/ClickHouse/issues/14528). [#21321](https://github.com/ClickHouse/ClickHouse/pull/21321) ([Maksim Kita](https://github.com/kitaisreal)).
* Added table function `dictionary`. It works the same way as the `Dictionary` engine. Closes [#21560](https://github.com/ClickHouse/ClickHouse/issues/21560). [#21910](https://github.com/ClickHouse/ClickHouse/pull/21910) ([Maksim Kita](https://github.com/kitaisreal)).
* Support `Nullable` type for `PolygonDictionary` attributes. [#21890](https://github.com/ClickHouse/ClickHouse/pull/21890) ([Maksim Kita](https://github.com/kitaisreal)).
* Functions `dictGet` and `dictHas` use the current database name if it is not specified for dictionaries created with DDL. Closes [#21632](https://github.com/ClickHouse/ClickHouse/issues/21632). [#21859](https://github.com/ClickHouse/ClickHouse/pull/21859) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dictGetOrNull`. It works like `dictGet`, but returns `NULL` if the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
* Added async update in `ComplexKeyCache`, `SSDCache` and `SSDComplexKeyCache` dictionaries. Added support for the `Nullable` type in `Cache`, `ComplexKeyCache`, `SSDCache` and `SSDComplexKeyCache` dictionaries. Added support for fetching multiple attributes with the `dictGet` and `dictGetOrDefault` functions. Fixes [#21517](https://github.com/ClickHouse/ClickHouse/issues/21517). [#20595](https://github.com/ClickHouse/ClickHouse/pull/20595) ([Maksim Kita](https://github.com/kitaisreal)).
* Support the `dictHas` function for `RangeHashedDictionary`. Fixes [#6680](https://github.com/ClickHouse/ClickHouse/issues/6680). [#19816](https://github.com/ClickHouse/ClickHouse/pull/19816) ([Maksim Kita](https://github.com/kitaisreal)).
* Add function `timezoneOf` that returns the timezone name of `DateTime` or `DateTime64` data types. This does not close [#9959](https://github.com/ClickHouse/ClickHouse/issues/9959). Fix inconsistencies in function names: add aliases `timezone` and `timeZone` as well as `toTimezone` and `toTimeZone` and `timezoneOf` and `timeZoneOf`. [#22001](https://github.com/ClickHouse/ClickHouse/pull/22001) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add new optional clause `GRANTEES` for `CREATE/ALTER USER` commands. It specifies users or roles which are allowed to receive grants from this user, on the condition that this user also has all required access granted with grant option. By default `GRANTEES ANY` is used, which means a user with grant option can grant to anyone. Syntax: `CREATE USER ... GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]`. [#21641](https://github.com/ClickHouse/ClickHouse/pull/21641) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add new column `slowdowns_count` to `system.clusters`. When using hedged requests, it shows how many times we switched to another replica because this replica was responding slowly. Also show the actual value of `errors_count` in `system.clusters`. [#21480](https://github.com/ClickHouse/ClickHouse/pull/21480) ([Kruglov Pavel](https://github.com/Avogar)).
* Add `_partition_id` virtual column for `MergeTree*` engines. Allow to prune partitions by `_partition_id`. Add `partitionID()` function to calculate the partition id string. [#21401](https://github.com/ClickHouse/ClickHouse/pull/21401) ([Amos Bird](https://github.com/amosbird)).
* Add function `isIPAddressInRange` to test whether an IPv4 or IPv6 address is contained in a given CIDR network prefix (see the sketch after this list). [#21329](https://github.com/ClickHouse/ClickHouse/pull/21329) ([PHO](https://github.com/depressed-pho)).
* Added new SQL command `ALTER TABLE 'table_name' UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'`. This command is needed to properly remove 'frozen' partitions from all disks. [#21142](https://github.com/ClickHouse/ClickHouse/pull/21142) ([Pavel Kovalenko](https://github.com/Jokser)).
* Support implicit key type conversion for JOIN. [#19885](https://github.com/ClickHouse/ClickHouse/pull/19885) ([Vladimir](https://github.com/vdimir)).
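A few hedged one-liners for the functions and the `GRANTEES` clause introduced above (`alice` and `charlie` are made-up user names):

```sql
SELECT isIPAddressInRange('127.0.0.1', '127.0.0.0/8'); -- 1
SELECT timezoneOf(now());                              -- the server timezone, e.g. 'UTC'

-- GRANTEES syntax from the entry above:
CREATE USER alice GRANTEES ANY EXCEPT charlie; -- alice may grant her privileges to anyone but charlie
```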

#### Experimental Feature

* Support `RANGE OFFSET` frame (for window functions) for floating point types. Implement `lagInFrame`/`leadInFrame` window functions, which are analogous to `lag`/`lead`, but respect the window frame (see the sketch after this list). They are identical when the frame is `between unbounded preceding and unbounded following`. This closes [#5485](https://github.com/ClickHouse/ClickHouse/issues/5485). [#21895](https://github.com/ClickHouse/ClickHouse/pull/21895) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Zero-copy replication for `ReplicatedMergeTree` over S3 storage. [#16240](https://github.com/ClickHouse/ClickHouse/pull/16240) ([ianton-ru](https://github.com/ianton-ru)).
* Added the possibility to migrate an existing S3 disk to the schema with backup-restore capabilities. [#22070](https://github.com/ClickHouse/ClickHouse/pull/22070) ([Pavel Kovalenko](https://github.com/Jokser)).
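A minimal sketch of `lagInFrame` (window functions are experimental in this release, hence the setting):

```sql
SET allow_experimental_window_functions = 1;

SELECT
    number,
    lagInFrame(number) OVER
        (ORDER BY number ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS prev
FROM numbers(5);
-- with an unbounded frame this behaves exactly like the classic lag()
```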

#### Performance Improvement

* Supported parallel formatting in `clickhouse-local` and everywhere else. [#21630](https://github.com/ClickHouse/ClickHouse/pull/21630) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Support parallel parsing for `CSVWithNames` and `TSVWithNames` formats. This closes [#21085](https://github.com/ClickHouse/ClickHouse/issues/21085). [#21149](https://github.com/ClickHouse/ClickHouse/pull/21149) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Enable reading with mmap IO for file ranges from 64 MiB (the setting `min_bytes_to_use_mmap_io`). It may lead to a moderate performance improvement. [#22326](https://github.com/ClickHouse/ClickHouse/pull/22326) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a cache for files read with the `min_bytes_to_use_mmap_io` setting. It gives a significant (2x and more) performance improvement when the value of the setting is small, by avoiding frequent mmap/munmap calls and the consequent page faults. Note that mmap IO has major drawbacks that make it less reliable in production (e.g. hangs or SIGBUS on faulty disks; less controllable memory usage). Nevertheless it is good in benchmarks. [#22206](https://github.com/ClickHouse/ClickHouse/pull/22206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid unnecessary data copying when using codec `NONE`. Please note that codec `NONE` is mostly useless - it's recommended to always use compression (`LZ4` is the default). Despite the common belief, disabling compression may not improve performance (the opposite effect is possible). The `NONE` codec is useful in some cases: when data is incompressible, and for synthetic benchmarks. [#22145](https://github.com/ClickHouse/ClickHouse/pull/22145) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Faster `GROUP BY` with a small `max_rows_to_group_by` and `group_by_overflow_mode='any'` (see the sketch after this list). [#21856](https://github.com/ClickHouse/ClickHouse/pull/21856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Optimize performance of queries like `SELECT ... FINAL ... WHERE`. Now in queries with `FINAL` it is allowed to move columns that are in the sorting key to `PREWHERE`. [#21830](https://github.com/ClickHouse/ClickHouse/pull/21830) ([foolchi](https://github.com/foolchi)).
* Improved performance by replacing `memcpy` with another implementation. This closes [#18583](https://github.com/ClickHouse/ClickHouse/issues/18583). [#21520](https://github.com/ClickHouse/ClickHouse/pull/21520) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve performance of aggregation in order of the sorting key (with the setting `optimize_aggregation_in_order` enabled). [#19401](https://github.com/ClickHouse/ClickHouse/pull/19401) ([Anton Popov](https://github.com/CurtizJ)).
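A sketch of the faster approximate `GROUP BY` path (the threshold is deliberately tiny for illustration):

```sql
-- With group_by_overflow_mode = 'any', aggregation stops adding new keys after
-- max_rows_to_group_by unique keys and returns a partial result instead of failing.
SELECT number % 100 AS k, count() AS c
FROM numbers(1000000)
GROUP BY k
SETTINGS max_rows_to_group_by = 10, group_by_overflow_mode = 'any';
```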

#### Improvement

* Add a connection pool for the PostgreSQL table/database engine and dictionary source. Should fix [#21444](https://github.com/ClickHouse/ClickHouse/issues/21444). [#21839](https://github.com/ClickHouse/ClickHouse/pull/21839) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support a non-default table schema for the postgres storage/table-function. Closes [#21701](https://github.com/ClickHouse/ClickHouse/issues/21701). [#21711](https://github.com/ClickHouse/ClickHouse/pull/21711) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support replica priorities for the postgres dictionary source. [#21710](https://github.com/ClickHouse/ClickHouse/pull/21710) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Introduce a new merge tree setting `min_bytes_to_rebalance_partition_over_jbod` which allows assigning new parts to different disks of a JBOD volume in a balanced way. [#16481](https://github.com/ClickHouse/ClickHouse/pull/16481) ([Amos Bird](https://github.com/amosbird)).
* Added `Grant`, `Revoke` and `System` values of the `query_kind` column for corresponding queries in `system.query_log`. [#21102](https://github.com/ClickHouse/ClickHouse/pull/21102) ([Vasily Nemkov](https://github.com/Enmk)).
* Allow customizing timeouts for the HTTP connections used for replication independently from other HTTP timeouts. [#20088](https://github.com/ClickHouse/ClickHouse/pull/20088) ([nvartolomei](https://github.com/nvartolomei)).
* Better exception message in the client in case of an exception while the server is writing blocks. In previous versions the client could get a misleading message like `Data compressed with different methods`. [#22427](https://github.com/ClickHouse/ClickHouse/pull/22427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error `Directory tmp_fetch_XXX already exists`, which could happen after a failed part fetch. Delete the temporary fetch directory if it already exists. Fixes [#14197](https://github.com/ClickHouse/ClickHouse/issues/14197). [#22411](https://github.com/ClickHouse/ClickHouse/pull/22411) ([nvartolomei](https://github.com/nvartolomei)).
* Fix MSan report for function `range` with a `UInt256` argument (support for large integers is experimental). This closes [#22157](https://github.com/ClickHouse/ClickHouse/issues/22157). [#22387](https://github.com/ClickHouse/ClickHouse/pull/22387) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `current_database` column to the `system.processes` table. It contains the current database of the query. [#22365](https://github.com/ClickHouse/ClickHouse/pull/22365) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add case-insensitive history search/navigation and subword movement features to `clickhouse-client`. [#22105](https://github.com/ClickHouse/ClickHouse/pull/22105) ([Amos Bird](https://github.com/amosbird)).
* If a tuple of NULLs, e.g. `(NULL, NULL)`, is on the left hand side of the `IN` operator with tuples of non-NULLs on the right hand side, e.g. `SELECT (NULL, NULL) IN ((0, 0), (3, 1))`, return 0 instead of throwing an exception about incompatible types. The expression may also appear due to optimization of something like `SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1)`. This closes [#22017](https://github.com/ClickHouse/ClickHouse/issues/22017). [#22063](https://github.com/ClickHouse/ClickHouse/pull/22063) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update the version of simdjson in use to 0.9.1. This fixes [#21984](https://github.com/ClickHouse/ClickHouse/issues/21984). [#22057](https://github.com/ClickHouse/ClickHouse/pull/22057) ([Vitaly Baranov](https://github.com/vitlibar)).
* Added case-insensitive aliases for the `CONNECTION_ID()` and `VERSION()` functions. This fixes [#22028](https://github.com/ClickHouse/ClickHouse/issues/22028). [#22042](https://github.com/ClickHouse/ClickHouse/pull/22042) ([Eugene Klimov](https://github.com/Slach)).
* Add option `strict_increase` to the `windowFunnel` function to count each event only once (resolves [#21835](https://github.com/ClickHouse/ClickHouse/issues/21835)). [#22025](https://github.com/ClickHouse/ClickHouse/pull/22025) ([Vladimir](https://github.com/vdimir)).
* If the partition key of a `MergeTree` table does not include `Date` or `DateTime` columns but includes exactly one `DateTime64` column, expose its values in the `min_time` and `max_time` columns in the `system.parts` and `system.parts_columns` tables. Add `min_time` and `max_time` columns to the `system.parts_columns` table (this was an inconsistency with the `system.parts` table). This closes [#18244](https://github.com/ClickHouse/ClickHouse/issues/18244). [#22011](https://github.com/ClickHouse/ClickHouse/pull/22011) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Supported the `replication_alter_partitions_sync=1` setting in `clickhouse-copier` for moving partitions from the helping table to the destination. Decreased default timeouts. Fixes [#21911](https://github.com/ClickHouse/ClickHouse/issues/21911). [#21912](https://github.com/ClickHouse/ClickHouse/pull/21912) ([turbo jason](https://github.com/songenjie)).
* Show the path to the data directory of `EmbeddedRocksDB` tables in system tables. [#21903](https://github.com/ClickHouse/ClickHouse/pull/21903) ([tavplubix](https://github.com/tavplubix)).
* Add profile event `HedgedRequestsChangeReplica`; change the read data timeout from seconds to milliseconds. [#21886](https://github.com/ClickHouse/ClickHouse/pull/21886) ([Kruglov Pavel](https://github.com/Avogar)).
* DiskS3 (experimental feature under development). Fixed a bug with the impossibility to move a directory if the destination is not empty and a cache disk is used. [#21837](https://github.com/ClickHouse/ClickHouse/pull/21837) ([Pavel Kovalenko](https://github.com/Jokser)).
* Better formatting for `Array` and `Map` data types in the Web UI. [#21798](https://github.com/ClickHouse/ClickHouse/pull/21798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update clusters only if their configurations were updated. [#21685](https://github.com/ClickHouse/ClickHouse/pull/21685) ([Kruglov Pavel](https://github.com/Avogar)).
* Propagate query and session settings for distributed DDL queries. Set `distributed_ddl_entry_format_version` to 2 to enable this. Added `distributed_ddl_output_mode` setting. Supported modes: `none`, `throw` (default), `null_status_on_timeout` and `never_throw`. Miscellaneous fixes and improvements for the `Replicated` database engine. [#21535](https://github.com/ClickHouse/ClickHouse/pull/21535) ([tavplubix](https://github.com/tavplubix)).
* If `PODArray` was instantiated with an element size that is neither a fraction nor a multiple of 16, a buffer overflow was possible. No bugs in current releases exist. [#21533](https://github.com/ClickHouse/ClickHouse/pull/21533) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add `last_error_time`/`last_error_message`/`last_error_stacktrace`/`remote` columns to `system.errors`. [#21529](https://github.com/ClickHouse/ClickHouse/pull/21529) ([Azat Khuzhin](https://github.com/azat)).
* Add aliases `simpleJSONExtract/simpleJSONHas` for `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}` (see the sketch after this list). Fixes #21383. [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
* Add setting `optimize_skip_unused_shards_limit` to limit the number of sharding key values for `optimize_skip_unused_shards`. [#21512](https://github.com/ClickHouse/ClickHouse/pull/21512) ([Azat Khuzhin](https://github.com/azat)).
* Improve `clickhouse-format`: do not throw an exception when there are extra spaces or a comment after the last query, and throw an exception early, with a readable message, when formatting an `ASTInsertQuery` with data. [#21311](https://github.com/ClickHouse/ClickHouse/pull/21311) ([flynn](https://github.com/ucasFL)).
* Improve support of integer keys in the `Map` data type. [#21157](https://github.com/ClickHouse/ClickHouse/pull/21157) ([Anton Popov](https://github.com/CurtizJ)).
* MaterializeMySQL: attempt to reconnect to MySQL if the connection is lost. [#20961](https://github.com/ClickHouse/ClickHouse/pull/20961) ([Håvard Kvålen](https://github.com/havardk)).
* Support more cases of rewriting `CROSS JOIN` to `INNER JOIN`. [#20392](https://github.com/ClickHouse/ClickHouse/pull/20392) ([Vladimir](https://github.com/vdimir)).
* Do not create empty parts on INSERT when the `optimize_on_insert` setting is enabled. Fixes [#20304](https://github.com/ClickHouse/ClickHouse/issues/20304). [#20387](https://github.com/ClickHouse/ClickHouse/pull/20387) ([Kruglov Pavel](https://github.com/Avogar)).
* `MaterializeMySQL`: add a minmax skipping index for the `_version` column. [#20382](https://github.com/ClickHouse/ClickHouse/pull/20382) ([Stig Bakken](https://github.com/stigsb)).
* Add option `--backslash` for `clickhouse-format`, which adds a backslash at the end of each line of the formatted query. [#21494](https://github.com/ClickHouse/ClickHouse/pull/21494) ([flynn](https://github.com/ucasFL)).
* Now ClickHouse will not throw a `LOGICAL_ERROR` exception when we try to mutate an already covered part. Fixes [#22013](https://github.com/ClickHouse/ClickHouse/issues/22013). [#22291](https://github.com/ClickHouse/ClickHouse/pull/22291) ([alesapin](https://github.com/alesapin)).
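Hedged one-liners for the `simpleJSON*` aliases and the NULL-tuple `IN` behaviour described above:

```sql
SELECT simpleJSONExtractUInt('{"a": 123}', 'a'); -- 123, same as visitParamExtractUInt
SELECT simpleJSONHas('{"a": 123}', 'a');         -- 1, same as visitParamHas
SELECT (NULL, NULL) IN ((0, 0), (3, 1));         -- now 0 instead of a type error
```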

#### Bug Fix

* Remove the socket from epoll before cancelling the packet receiver in `HedgedConnections` to prevent a possible race. Fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)).
* Add (missing) memory accounting in parallel parsing routines. In previous versions OOM was possible when the result set contained very large blocks of data. This closes [#22008](https://github.com/ClickHouse/ClickHouse/issues/22008). [#22425](https://github.com/ClickHouse/ClickHouse/pull/22425) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix an exception which may happen when a `SELECT` has a constant `WHERE` condition and the source table has columns whose names are digits. [#22270](https://github.com/ClickHouse/ClickHouse/pull/22270) ([LiuNeng](https://github.com/liuneng1994)).
* Fix query cancellation with `use_hedged_requests=0` and `async_socket_for_remote=1`. [#22183](https://github.com/ClickHouse/ClickHouse/pull/22183) ([Azat Khuzhin](https://github.com/azat)).
* Fix an uncaught exception in `InterserverIOHTTPHandler`. [#22146](https://github.com/ClickHouse/ClickHouse/pull/22146) ([Azat Khuzhin](https://github.com/azat)).
* Fix the docker entrypoint in case `http_port` is not in the config. [#22132](https://github.com/ClickHouse/ClickHouse/pull/22132) ([Ewout](https://github.com/devwout)).
* Fix error `Invalid number of rows in Chunk` in `JOIN` with `TOTALS` and `arrayJoin`. Closes [#19303](https://github.com/ClickHouse/ClickHouse/issues/19303). [#22129](https://github.com/ClickHouse/ClickHouse/pull/22129) ([Vladimir](https://github.com/vdimir)).
* Fix the name of the background thread pool used to poll messages from Kafka. A Kafka engine with the broken thread pool would not consume messages from the message queue. [#22122](https://github.com/ClickHouse/ClickHouse/pull/22122) ([fastio](https://github.com/fastio)).
* Fix waiting for `OPTIMIZE` and `ALTER` queries for `ReplicatedMergeTree` table engines. Now the query will not hang when the table was detached or restarted. [#22118](https://github.com/ClickHouse/ClickHouse/pull/22118) ([alesapin](https://github.com/alesapin)).
* Disable `async_socket_for_remote`/`use_hedged_requests` for buggy Linux kernels. [#22109](https://github.com/ClickHouse/ClickHouse/pull/22109) ([Azat Khuzhin](https://github.com/azat)).
* Docker entrypoint: avoid chown of `.` in case `LOG_PATH` is empty. Closes [#22100](https://github.com/ClickHouse/ClickHouse/issues/22100). [#22102](https://github.com/ClickHouse/ClickHouse/pull/22102) ([filimonov](https://github.com/filimonov)).
* The function `decrypt` was lacking a check for the minimal size of data encrypted in `AEAD` mode. This closes [#21897](https://github.com/ClickHouse/ClickHouse/issues/21897). [#22064](https://github.com/ClickHouse/ClickHouse/pull/22064) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* In a rare case, a merge for `CollapsingMergeTree` may create a granule with `index_granularity + 1` rows. Because of this, an internal check added in [#18928](https://github.com/ClickHouse/ClickHouse/issues/18928) (affects 21.2 and 21.3) may fail with the error `Incomplete granules are not allowed while blocks are granules size`. This error did not allow parts to merge. [#21976](https://github.com/ClickHouse/ClickHouse/pull/21976) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Reverted [#15454](https://github.com/ClickHouse/ClickHouse/issues/15454), which could cause a significant increase in memory usage while loading external dictionaries of hashed type. This closes [#21935](https://github.com/ClickHouse/ClickHouse/issues/21935). [#21948](https://github.com/ClickHouse/ClickHouse/pull/21948) ([Maksim Kita](https://github.com/kitaisreal)).
* Prevent hedged connection overlaps (`Unknown packet 9 from server` error). [#21941](https://github.com/ClickHouse/ClickHouse/pull/21941) ([Azat Khuzhin](https://github.com/azat)).
* Fix reading of HTTP POST requests with "multipart/form-data" content type in some cases. [#21936](https://github.com/ClickHouse/ClickHouse/pull/21936) ([Ivan](https://github.com/abyss7)).
* Fix wrong `ORDER BY` results when a query contains window functions and the optimization for reading in primary key order is applied. Fixes [#21828](https://github.com/ClickHouse/ClickHouse/issues/21828). [#21915](https://github.com/ClickHouse/ClickHouse/pull/21915) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix a deadlock in the first catboost model execution. Closes [#13832](https://github.com/ClickHouse/ClickHouse/issues/13832). [#21844](https://github.com/ClickHouse/ClickHouse/pull/21844) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix an incorrect query result (and possible crash) which could happen when a `WHERE` or `HAVING` condition is pushed before `GROUP BY`. Fixes [#21773](https://github.com/ClickHouse/ClickHouse/issues/21773). [#21841](https://github.com/ClickHouse/ClickHouse/pull/21841) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Better error handling and logging in `WriteBufferFromS3`. [#21836](https://github.com/ClickHouse/ClickHouse/pull/21836) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix possible crashes in aggregate functions with the combinator `Distinct` while using two-level aggregation. This is a follow-up fix of [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365). It could only be reproduced in a production environment. [#21818](https://github.com/ClickHouse/ClickHouse/pull/21818) ([Amos Bird](https://github.com/amosbird)).
* Fix scalar subquery index analysis. This fixes [#21717](https://github.com/ClickHouse/ClickHouse/issues/21717), which was introduced in [#18896](https://github.com/ClickHouse/ClickHouse/pull/18896). [#21766](https://github.com/ClickHouse/ClickHouse/pull/21766) ([Amos Bird](https://github.com/amosbird)).
* Fix a bug in `ReplicatedMerge` table engines when an `ALTER MODIFY COLUMN` query doesn't change the type of a `Decimal` column if its size (32 bit or 64 bit) doesn't change. [#21728](https://github.com/ClickHouse/ClickHouse/pull/21728) ([alesapin](https://github.com/alesapin)).
* Fix possible infinite waiting when concurrent `OPTIMIZE` and `DROP` are run for `ReplicatedMergeTree`. [#21716](https://github.com/ClickHouse/ClickHouse/pull/21716) ([Azat Khuzhin](https://github.com/azat)).
* Fix function `arrayElement` with type `Map` for constant integer arguments (see the sketch after this list). [#21699](https://github.com/ClickHouse/ClickHouse/pull/21699) ([Anton Popov](https://github.com/CurtizJ)).
* Fix SIGSEGV on non-existing attributes from `ip_trie` with `access_to_key_from_attributes`. [#21692](https://github.com/ClickHouse/ClickHouse/pull/21692) ([Azat Khuzhin](https://github.com/azat)).
* The server now starts accepting connections only after `DDLWorker` and dictionary initialization. [#21676](https://github.com/ClickHouse/ClickHouse/pull/21676) ([Azat Khuzhin](https://github.com/azat)).
* Add type conversion for keys of tables of type `Join` (previously this led to SIGSEGV). [#21646](https://github.com/ClickHouse/ClickHouse/pull/21646) ([Azat Khuzhin](https://github.com/azat)).
* Fix distributed request cancellation (for example a simple select from multiple shards with limit, i.e. `select * from remote('127.{2,3}', system.numbers) limit 100`) with `async_socket_for_remote=1`. [#21643](https://github.com/ClickHouse/ClickHouse/pull/21643) ([Azat Khuzhin](https://github.com/azat)).
* Fix `fsync_part_directory` for horizontal merge. [#21642](https://github.com/ClickHouse/ClickHouse/pull/21642) ([Azat Khuzhin](https://github.com/azat)).
* Remove unknown columns from the joined table in `WHERE` for queries to external database engines (MySQL, PostgreSQL). Closes [#14614](https://github.com/ClickHouse/ClickHouse/issues/14614), closes [#19288](https://github.com/ClickHouse/ClickHouse/issues/19288) (dup), closes [#19645](https://github.com/ClickHouse/ClickHouse/issues/19645) (dup). [#21640](https://github.com/ClickHouse/ClickHouse/pull/21640) ([Vladimir](https://github.com/vdimir)).
* `std::terminate` was called if there was an error writing data into S3. [#21624](https://github.com/ClickHouse/ClickHouse/pull/21624) ([Vladimir](https://github.com/vdimir)).
* Fix a possible `Cannot find column` error when `optimize_skip_unused_shards` is enabled and zero shards are used. [#21579](https://github.com/ClickHouse/ClickHouse/pull/21579) ([Azat Khuzhin](https://github.com/azat)).
* If a query had a constant `WHERE` condition and the setting `optimize_skip_unused_shards` was enabled, all shards could be skipped and the query could return an incorrect empty result. [#21550](https://github.com/ClickHouse/ClickHouse/pull/21550) ([Amos Bird](https://github.com/amosbird)).
* Fix table function `clusterAllReplicas` returning a wrong `_shard_num`. Closes [#21481](https://github.com/ClickHouse/ClickHouse/issues/21481). [#21498](https://github.com/ClickHouse/ClickHouse/pull/21498) ([flynn](https://github.com/ucasFL)).
* Fix the S3 table holding old credentials after a config update. [#21457](https://github.com/ClickHouse/ClickHouse/pull/21457) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Fixed a race on the SSL object inside `SecureSocket` in Poco. [#21456](https://github.com/ClickHouse/ClickHouse/pull/21456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix `Avro` format parsing for `Kafka`. Fixes [#21437](https://github.com/ClickHouse/ClickHouse/issues/21437). [#21438](https://github.com/ClickHouse/ClickHouse/pull/21438) ([Ilya Golshtein](https://github.com/ilejn)).
* Fix receive and send timeouts and non-blocking read in secure sockets. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)).
* The `force_drop_table` flag didn't work for `MATERIALIZED VIEW`; it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)).
* Fix name clashes in `PredicateRewriteVisitor`. They caused incorrect `WHERE` filtration after a full join. Closes [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
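A minimal sketch of `Map` access with a constant integer key, the case fixed above (`Map` is still experimental in this release; the gating setting name is our assumption for this version):

```sql
SET allow_experimental_map_type = 1; -- assumed gate for the experimental Map type
SELECT map(1, 'one', 2, 'two')[2];   -- 'two'; equivalent to arrayElement(map(...), 2)
```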

#### Build/Testing/Packaging Improvement

* Add [Jepsen](https://github.com/jepsen-io/jepsen) tests for ClickHouse Keeper. [#21677](https://github.com/ClickHouse/ClickHouse/pull/21677) ([alesapin](https://github.com/alesapin)).
* Run stateless tests in parallel in CI. Depends on [#22181](https://github.com/ClickHouse/ClickHouse/issues/22181). [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
* Enable status check for [SQLancer](https://github.com/sqlancer/sqlancer) CI runs. [#22015](https://github.com/ClickHouse/ClickHouse/pull/22015) ([Ilya Yatsishin](https://github.com/qoega)).
* Multiple preparations for PowerPC builds: Enable the bundled openldap on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable compiling on `ppc64le` with Clang. [#22476](https://github.com/ClickHouse/ClickHouse/pull/22476) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix compiling boost on `ppc64le`. [#22474](https://github.com/ClickHouse/ClickHouse/pull/22474) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix a CMake error about the internal CMake variable `CMAKE_ASM_COMPILE_OBJECT` not set on `ppc64le`. [#22469](https://github.com/ClickHouse/ClickHouse/pull/22469) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix Fedora/RHEL/CentOS not finding `libclang_rt.builtins` on `ppc64le`. [#22458](https://github.com/ClickHouse/ClickHouse/pull/22458) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable building with `jemalloc` on `ppc64le`. [#22447](https://github.com/ClickHouse/ClickHouse/pull/22447) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix ClickHouse's config embedding and cctz's timezone embedding on `ppc64le`. [#22445](https://github.com/ClickHouse/ClickHouse/pull/22445) ([Kfir Itzhak](https://github.com/mastertheknife)). Fixed compiling on `ppc64le` and use the correct instruction pointer register on `ppc64le`. [#22430](https://github.com/ClickHouse/ClickHouse/pull/22430) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Re-enable the S3 (AWS) library on `aarch64`. [#22484](https://github.com/ClickHouse/ClickHouse/pull/22484) ([Kfir Itzhak](https://github.com/mastertheknife)).
* Add `tzdata` to Docker containers because reading `ORC` formats requires it. This closes [#14156](https://github.com/ClickHouse/ClickHouse/issues/14156). [#22000](https://github.com/ClickHouse/ClickHouse/pull/22000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Introduce 2 arguments for the `clickhouse-server` image Dockerfile: `deb_location` & `single_binary_location`. [#21977](https://github.com/ClickHouse/ClickHouse/pull/21977) ([filimonov](https://github.com/filimonov)).
* Allow using clang-tidy with release builds by enabling assertions if it is used. [#21914](https://github.com/ClickHouse/ClickHouse/pull/21914) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add llvm-12 binary names to search for in cmake scripts. Implicit constant conversions to mute clang warnings. Updated submodules to build with CMake 3.19. Mute recursion in macro expansion in the `readpassphrase` library. The deprecated `-fuse-ld` changed to `--ld-path` for clang. [#21597](https://github.com/ClickHouse/ClickHouse/pull/21597) ([Ilya Yatsishin](https://github.com/qoega)).
* Update `docker/test/testflows/runner/dockerd-entrypoint.sh` to use the Yandex dockerhub-proxy, because Docker Hub has enabled very restrictive rate limits. [#21551](https://github.com/ClickHouse/ClickHouse/pull/21551) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix macOS shared lib build. [#20184](https://github.com/ClickHouse/ClickHouse/pull/20184) ([nvartolomei](https://github.com/nvartolomei)).
* Add `ctime` option to `zookeeper-dump-tree`. It allows dumping node creation time. [#21842](https://github.com/ClickHouse/ClickHouse/pull/21842) ([Ilya](https://github.com/HumanUser)).

## ClickHouse release 21.3 (LTS)

### ClickHouse release v21.3, 2021-03-12

@@ -26,7 +179,7 @@
#### Experimental feature

* Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. [#16193](https://github.com/ClickHouse/ClickHouse/pull/16193) ([tavplubix](https://github.com/tavplubix)).
-* Introduce experimental support for window functions, enabled with `allow_experimental_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)).
+* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Add the ability to backup/restore metadata files for DiskS3. [#18377](https://github.com/ClickHouse/ClickHouse/pull/18377) ([Pavel Kovalenko](https://github.com/Jokser)).
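A hedged sketch of the experimental `Replicated` database engine from the first entry above (the ZooKeeper path and replica names are made up, and the gating setting name is our assumption):

```sql
SET allow_experimental_database_replicated = 1; -- assumed name of the experimental gate

-- DDL queries run in this database are replicated to all of its replicas.
CREATE DATABASE test_db
ENGINE = Replicated('/clickhouse/databases/test_db', 'shard1', 'replica1');
```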

#### Performance Improvement

CMakeLists.txt

@@ -39,6 +39,8 @@ else()
     set(RECONFIGURE_MESSAGE_LEVEL STATUS)
 endif()

+enable_language(C CXX ASM)
+
 include (cmake/arch.cmake)
 include (cmake/target.cmake)
 include (cmake/tools.cmake)
@@ -66,17 +68,30 @@ endif ()

 include (cmake/find/ccache.cmake)

-option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling" OFF)
+# Take care to add prlimit in command line before ccache, or else ccache thinks that
+# prlimit is compiler, and clang++ is its input file, and refuses to work with
+# multiple inputs, e.g in ccache log:
+# [2021-03-31T18:06:32.655327 36900] Command line: /usr/bin/ccache prlimit --as=10000000000 --data=5000000000 --cpu=600 /usr/bin/clang++-11 - ...... std=gnu++2a -MD -MT src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -MF src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o.d -o src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -c ../src/Storages/MergeTree/IMergeTreeDataPart.cpp
+#
+# [2021-03-31T18:06:32.656704 36900] Multiple input files: /usr/bin/clang++-11 and ../src/Storages/MergeTree/IMergeTreeDataPart.cpp
+#
+# Another way would be to use --ccache-skip option before clang++-11 to make
+# ccache ignore it.
+option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling." OFF)
 if (ENABLE_CHECK_HEAVY_BUILDS)
     # set DATA (since RSS does not work since 2.6.x+) to 2G
     set (RLIMIT_DATA 5000000000)
     # set VIRT (RLIMIT_AS) to 10G (DATA*10)
     set (RLIMIT_AS 10000000000)
+    # set CPU time limit to 600 seconds
+    set (RLIMIT_CPU 600)

     # gcc10/gcc10/clang -fsanitize=memory is too heavy
     if (SANITIZE STREQUAL "memory" OR COMPILER_GCC)
         set (RLIMIT_DATA 10000000000)
     endif()
-    set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=600)
+
+    set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=${RLIMIT_CPU} ${CMAKE_CXX_COMPILER_LAUNCHER})
 endif ()

 if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "None")
@@ -248,19 +263,27 @@ if (ARCH_NATIVE)
     set (COMPILER_FLAGS "${COMPILER_FLAGS} -march=native")
 endif ()

-if (COMPILER_GCC OR COMPILER_CLANG)
-    # to make numeric_limits<__int128> works with GCC
-    set (_CXX_STANDARD "gnu++2a")
-else()
-    set (_CXX_STANDARD "c++2a")
-endif()
-
-# cmake < 3.12 doesn't support 20. We'll set CMAKE_CXX_FLAGS for now
-# set (CMAKE_CXX_STANDARD 20)
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}")
+if (${CMAKE_VERSION} VERSION_LESS "3.12.4")
+    # CMake < 3.12 doesn't support setting 20 as a C++ standard version.
+    # We will add C++ standard controlling flag in CMAKE_CXX_FLAGS manually for now.
+
+    if (COMPILER_GCC OR COMPILER_CLANG)
+        # to make numeric_limits<__int128> works with GCC
+        set (_CXX_STANDARD "gnu++2a")
+    else ()
+        set (_CXX_STANDARD "c++2a")
+    endif ()
+
+    set (CMAKE_CXX_EXTENSIONS 0) # https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html#prop_tgt:CXX_EXTENSIONS
+    set (CMAKE_CXX_STANDARD_REQUIRED ON)
+    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=${_CXX_STANDARD}")
+else ()
+    set (CMAKE_CXX_STANDARD 20)
+    set (CMAKE_CXX_EXTENSIONS ON) # Same as gnu++2a (ON) vs c++2a (OFF): https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html
+    set (CMAKE_CXX_STANDARD_REQUIRED ON)
+endif ()

 set (CMAKE_C_STANDARD 11)
 set (CMAKE_C_EXTENSIONS ON)
 set (CMAKE_C_STANDARD_REQUIRED ON)

 if (COMPILER_GCC OR COMPILER_CLANG)
     # Enable C++14 sized global deallocation functions. It should be enabled by setting -std=c++14 but I'm not sure.
@@ -454,6 +477,7 @@ find_contrib_lib(double-conversion) # Must be before parquet
 include (cmake/find/ssl.cmake)
 include (cmake/find/ldap.cmake) # after ssl
 include (cmake/find/icu.cmake)
+include (cmake/find/xz.cmake)
 include (cmake/find/zlib.cmake)
 include (cmake/find/zstd.cmake)
 include (cmake/find/ltdl.cmake) # for odbc
@@ -488,6 +512,7 @@ include (cmake/find/fastops.cmake)
 include (cmake/find/odbc.cmake)
 include (cmake/find/rocksdb.cmake)
 include (cmake/find/libpqxx.cmake)
+include (cmake/find/nanodbc.cmake)
 include (cmake/find/nuraft.cmake)

@@ -501,6 +526,7 @@ include (cmake/find/msgpack.cmake)
 include (cmake/find/cassandra.cmake)
 include (cmake/find/sentry.cmake)
 include (cmake/find/stats.cmake)
+include (cmake/find/datasketches.cmake)

 set (USE_INTERNAL_CITYHASH_LIBRARY ON CACHE INTERNAL "")
 find_contrib_lib(cityhash)

README.md

@@ -8,7 +8,7 @@ ClickHouse® is an open-source column-oriented database management system that a
 * [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster.
 * [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information.
 * [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format.
-* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-ly9m4w1x-6j7x5Ts_pQZqrctAbRZ3cg) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
+* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-nwwakmk4-xOJ6cdy0sJC3It8j348~IA) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time.
 * [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events.
 * [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation.
 * [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.

base/CMakeLists.txt

@@ -8,6 +8,7 @@ add_subdirectory (loggers)
 add_subdirectory (pcg-random)
 add_subdirectory (widechar_width)
 add_subdirectory (readpassphrase)
+add_subdirectory (bridge)

 if (USE_MYSQL)
     add_subdirectory (mysqlxx)

base/bridge/CMakeLists.txt (new file, 7 lines)

@@ -0,0 +1,7 @@
add_library (bridge
    IBridge.cpp
)

target_include_directories (daemon PUBLIC ..)
target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC)

base/bridge/IBridge.cpp (new file, 238 lines)

@@ -0,0 +1,238 @@
#include "IBridge.h"

#include <IO/ReadHelpers.h>
#include <boost/program_options.hpp>
#include <Poco/Net/NetException.h>
#include <Poco/Util/HelpFormatter.h>
#include <Common/StringUtils/StringUtils.h>
#include <Formats/registerFormats.h>
#include <common/logger_useful.h>
#include <Common/SensitiveDataMasker.h>
#include <Server/HTTP/HTTPServer.h>

#if USE_ODBC
#    include <Poco/Data/ODBC/Connector.h>
#endif


namespace DB
{

namespace ErrorCodes
{
    extern const int ARGUMENT_OUT_OF_BOUND;
}

namespace
{
    Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log)
    {
        Poco::Net::SocketAddress socket_address;
        try
        {
            socket_address = Poco::Net::SocketAddress(host, port);
        }
        catch (const Poco::Net::DNSException & e)
        {
            const auto code = e.code();
            if (code == EAI_FAMILY
#if defined(EAI_ADDRFAMILY)
                || code == EAI_ADDRFAMILY
#endif
            )
            {
                LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in <listen_host> element of configuration file. Example: <listen_host>0.0.0.0</listen_host>", host, e.code(), e.message());
            }

            throw;
        }
        return socket_address;
    }

    Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log)
    {
        auto address = makeSocketAddress(host, port, log);
#if POCO_VERSION < 0x01080000
        socket.bind(address, /* reuseAddress = */ true);
#else
        socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false);
#endif

        socket.listen(/* backlog = */ 64);

        return address;
    }
}


void IBridge::handleHelp(const std::string &, const std::string &)
{
    Poco::Util::HelpFormatter help_formatter(options());
    help_formatter.setCommand(commandName());
    help_formatter.setHeader("HTTP-proxy for odbc requests");
    help_formatter.setUsage("--http-port <port>");
    help_formatter.format(std::cerr);

    stopOptionsProcessing();
}


void IBridge::defineOptions(Poco::Util::OptionSet & options)
{
    options.addOption(
        Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true).binding("http-port"));

    options.addOption(
        Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host"));

    options.addOption(
        Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout"));

    options.addOption(
        Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024").argument("max-server-connections").binding("max-server-connections"));

    options.addOption(
        Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10").argument("keep-alive-timeout").binding("keep-alive-timeout"));

    options.addOption(
        Poco::Util::Option("log-level", "", "sets log level, default info").argument("log-level").binding("logger.level"));

    options.addOption(
        Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log"));

    options.addOption(
        Poco::Util::Option("err-log-path", "", "err log path for all logs, default no").argument("err-log-path").binding("logger.errorlog"));

    options.addOption(
        Poco::Util::Option("stdout-path", "", "stdout log path, default console").argument("stdout-path").binding("logger.stdout"));

    options.addOption(
        Poco::Util::Option("stderr-path", "", "stderr log path, default console").argument("stderr-path").binding("logger.stderr"));

    using Me = std::decay_t<decltype(*this)>;

    options.addOption(
        Poco::Util::Option("help", "", "produce this help message").binding("help").callback(Poco::Util::OptionCallback<Me>(this, &Me::handleHelp)));

    ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config
}


void IBridge::initialize(Application & self)
{
    BaseDaemon::closeFDs();
    is_help = config().has("help");

    if (is_help)
        return;

    config().setString("logger", bridgeName());

    /// Redirect stdout, stderr to specified files.
    /// Some libraries and sanitizers write to stderr in case of errors.
    const auto stdout_path = config().getString("logger.stdout", "");
    if (!stdout_path.empty())
    {
        if (!freopen(stdout_path.c_str(), "a+", stdout))
            throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path);

        /// Disable buffering for stdout.
        setbuf(stdout, nullptr);
    }
    const auto stderr_path = config().getString("logger.stderr", "");
    if (!stderr_path.empty())
    {
        if (!freopen(stderr_path.c_str(), "a+", stderr))
            throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path);

        /// Disable buffering for stderr.
        setbuf(stderr, nullptr);
    }

    buildLoggers(config(), logger(), self.commandName());

    BaseDaemon::logRevision();

    log = &logger();
    hostname = config().getString("listen-host", "127.0.0.1");
    port = config().getUInt("http-port");
    if (port > 0xFFFF)
        throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND);

    http_timeout = config().getUInt("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT);
    max_server_connections = config().getUInt("max-server-connections", 1024);
    keep_alive_timeout = config().getUInt("keep-alive-timeout", 10);

    initializeTerminationAndSignalProcessing();

#if USE_ODBC
    if (bridgeName() == "ODBCBridge")
        Poco::Data::ODBC::Connector::registerConnector();
#endif

    ServerApplication::initialize(self); // NOLINT
}


void IBridge::uninitialize()
{
    BaseDaemon::uninitialize();
}


int IBridge::main(const std::vector<std::string> & /*args*/)
{
    if (is_help)
        return Application::EXIT_OK;

    registerFormats();
    LOG_INFO(log, "Starting up {} on host: {}, port: {}", bridgeName(), hostname, port);

    Poco::Net::ServerSocket socket;
    auto address = socketBindListen(socket, hostname, port, log);
    socket.setReceiveTimeout(http_timeout);
    socket.setSendTimeout(http_timeout);

    Poco::ThreadPool server_pool(3, max_server_connections);

    Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams;
    http_params->setTimeout(http_timeout);
    http_params->setKeepAliveTimeout(keep_alive_timeout);

    auto shared_context = Context::createShared();
    auto context = Context::createGlobal(shared_context.get());
    context->makeGlobalContext();

    if (config().has("query_masking_rules"))
        SensitiveDataMasker::setInstance(std::make_unique<SensitiveDataMasker>(config(), "query_masking_rules"));

    auto server = HTTPServer(
        context,
        getHandlerFactoryPtr(context),
        server_pool,
        socket,
        http_params);

    SCOPE_EXIT({
        LOG_DEBUG(log, "Received termination signal.");
        LOG_DEBUG(log, "Waiting for current connections to close.");

        server.stop();

        for (size_t count : ext::range(1, 6))
        {
            if (server.currentConnections() == 0)
                break;
            LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count);
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        }
    });

    server.start();
    LOG_INFO(log, "Listening http://{}", address.toString());

    waitForTerminationRequest();
    return Application::EXIT_OK;
}

}
51
base/bridge/IBridge.h
Normal file
51
base/bridge/IBridge.h
Normal file
@ -0,0 +1,51 @@
#pragma once

#include <Interpreters/Context.h>
#include <Server/HTTP/HTTPRequestHandlerFactory.h>
#include <daemon/BaseDaemon.h>

#include <Poco/Logger.h>
#include <Poco/Util/ServerApplication.h>


namespace DB
{

/// Class represents base for clickhouse-odbc-bridge and clickhouse-library-bridge servers.
/// Listens to incoming HTTP POST and GET requests on specified port and host.
/// Has two handlers '/' for all incoming POST requests and /ping for GET request about service status.
class IBridge : public BaseDaemon
{

public:
    /// Define command line arguments
    void defineOptions(Poco::Util::OptionSet & options) override;

protected:
    using HandlerFactoryPtr = std::shared_ptr<HTTPRequestHandlerFactory>;

    void initialize(Application & self) override;

    void uninitialize() override;

    int main(const std::vector<std::string> & args) override;

    virtual std::string bridgeName() const = 0;

    virtual HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const = 0;

    size_t keep_alive_timeout;

private:
    void handleHelp(const std::string &, const std::string &);

    bool is_help;
    std::string hostname;
    size_t port;
    std::string log_level;
    size_t max_server_connections;
    size_t http_timeout;

    Poco::Logger * log;
};
}
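A concrete bridge only has to supply a name and a handler factory; everything else (option parsing, socket setup, shutdown) lives in the base class above. A hypothetical minimal subclass for illustration (SomeHandlerFactory is an assumed stand-in, not a type from this commit):

    class HypotheticalBridge : public DB::IBridge
    {
    protected:
        std::string bridgeName() const override
        {
            return "HypotheticalBridge";
        }

        HandlerFactoryPtr getHandlerFactoryPtr(DB::ContextPtr context) const override
        {
            /// Any HTTPRequestHandlerFactory subclass would do here.
            return std::make_shared<SomeHandlerFactory>(context);
        }
    };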
@ -7,8 +7,7 @@
#include <condition_variable>

#include <common/defines.h>

#include <Common/MoveOrCopyIfThrow.h>
#include <common/MoveOrCopyIfThrow.h>

/** Pool for limited size objects that cannot be used from different threads simultaneously.
  * The main use case is to have fixed size of objects that can be reused in difference threads during their lifetime
@ -47,6 +47,10 @@ endif()

target_include_directories(common PUBLIC .. ${CMAKE_CURRENT_BINARY_DIR}/..)

if (OS_DARWIN AND NOT MAKE_STATIC_LIBRARIES)
    target_link_libraries(common PUBLIC -Wl,-U,_inside_main)
endif()

# Allow explicit fallback to readline
if (NOT ENABLE_REPLXX AND ENABLE_READLINE)
    message (STATUS "Attempt to fallback to readline explicitly")
@ -152,7 +152,7 @@ const DateLUTImpl & DateLUT::getImplementation(const std::string & time_zone) co

    auto it = impls.emplace(time_zone, nullptr).first;
    if (!it->second)
        it->second = std::make_unique<DateLUTImpl>(time_zone);
        it->second = std::unique_ptr<DateLUTImpl>(new DateLUTImpl(time_zone));

    return *it->second;
}
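getImplementation keeps one lazily built DateLUTImpl per time zone: emplace() inserts an empty slot only if the key is new, so the expensive construction runs at most once per zone. The same emplace-then-fill idiom in isolation (a minimal sketch with a plain map and a stand-in value type, not the actual DateLUT code):

    #include <map>
    #include <memory>
    #include <string>

    struct ExpensiveValue
    {
        explicit ExpensiveValue(const std::string & key) : name(key) {}
        std::string name;
    };

    /// emplace() returns the iterator whether or not insertion happened,
    /// so construction is triggered exactly once per key.
    const ExpensiveValue & getOrCreate(
        std::map<std::string, std::unique_ptr<ExpensiveValue>> & cache, const std::string & key)
    {
        auto it = cache.emplace(key, nullptr).first;
        if (!it->second)
            it->second = std::make_unique<ExpensiveValue>(key);
        return *it->second;
    }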
@ -32,7 +32,6 @@ public:

        return date_lut.getImplementation(time_zone);
    }

    static void setDefaultTimezone(const std::string & time_zone)
    {
        auto & date_lut = getInstance();
@ -46,24 +46,41 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
    if (&inside_main)
        assert(inside_main);

    size_t i = 0;
    time_t start_of_day = 0;

    cctz::time_zone cctz_time_zone;
    if (!cctz::load_time_zone(time_zone, &cctz_time_zone))
        throw Poco::Exception("Cannot load time zone " + time_zone_);

    cctz::time_zone::absolute_lookup start_of_epoch_lookup = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t(start_of_day));
    offset_at_start_of_epoch = start_of_epoch_lookup.offset;
    offset_is_whole_number_of_hours_everytime = true;
    constexpr cctz::civil_day epoch{1970, 1, 1};
    constexpr cctz::civil_day lut_start{DATE_LUT_MIN_YEAR, 1, 1};
    time_t start_of_day;

    cctz::civil_day date{1970, 1, 1};
    /// Note: it's validated against all timezones in the system.
    static_assert((epoch - lut_start) == daynum_offset_epoch);

    offset_at_start_of_epoch = cctz_time_zone.lookup(cctz_time_zone.lookup(epoch).pre).offset;
    offset_at_start_of_lut = cctz_time_zone.lookup(cctz_time_zone.lookup(lut_start).pre).offset;
    offset_is_whole_number_of_hours_during_epoch = true;

    cctz::civil_day date = lut_start;

    UInt32 i = 0;
    do
    {
        cctz::time_zone::civil_lookup lookup = cctz_time_zone.lookup(date);

        start_of_day = std::chrono::system_clock::to_time_t(lookup.pre); /// Ambiguity is possible.
        /// Ambiguity is possible if time was changed backwards at the midnight
        /// or after midnight time has been changed back to midnight, for example one hour backwards at 01:00
        /// or after midnight time has been changed to the previous day, for example two hours backwards at 01:00
        /// Then midnight appears twice. Usually time change happens exactly at 00:00 or 01:00.

        /// If transition did not involve previous day, we should use the first midnight as the start of the day,
        /// otherwise it's better to use the second midnight.

        std::chrono::time_point start_of_day_time_point = lookup.trans < lookup.post
            ? lookup.post /* Second midnight appears after transition, so there was a piece of previous day after transition */
            : lookup.pre;

        start_of_day = std::chrono::system_clock::to_time_t(start_of_day_time_point);

        Values & values = lut[i];
        values.year = date.year();
@ -72,7 +89,7 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
        values.day_of_week = getDayOfWeek(date);
        values.date = start_of_day;

        assert(values.year >= DATE_LUT_MIN_YEAR && values.year <= DATE_LUT_MAX_YEAR);
        assert(values.year >= DATE_LUT_MIN_YEAR && values.year <= DATE_LUT_MAX_YEAR + 1);
        assert(values.month >= 1 && values.month <= 12);
        assert(values.day_of_month >= 1 && values.day_of_month <= 31);
        assert(values.day_of_week >= 1 && values.day_of_week <= 7);
@ -85,50 +102,42 @@ DateLUTImpl::DateLUTImpl(const std::string & time_zone_)
        else
            values.days_in_month = i != 0 ? lut[i - 1].days_in_month : 31;

        values.time_at_offset_change = 0;
        values.amount_of_offset_change = 0;
        values.time_at_offset_change_value = 0;
        values.amount_of_offset_change_value = 0;

        if (start_of_day % 3600)
            offset_is_whole_number_of_hours_everytime = false;
        if (offset_is_whole_number_of_hours_during_epoch && start_of_day > 0 && start_of_day % 3600)
            offset_is_whole_number_of_hours_during_epoch = false;

        /// If UTC offset was changed in previous day.
        if (i != 0)
        /// If UTC offset was changed this day.
        /// Change in time zone without transition is possible, e.g. Moscow 1991 Sun, 31 Mar, 02:00 MSK to EEST
        cctz::time_zone::civil_transition transition{};
        if (cctz_time_zone.next_transition(start_of_day_time_point - std::chrono::seconds(1), &transition)
            && (cctz::civil_day(transition.from) == date || cctz::civil_day(transition.to) == date)
            && transition.from != transition.to)
        {
            auto amount_of_offset_change_at_prev_day = 86400 - (lut[i].date - lut[i - 1].date);
            if (amount_of_offset_change_at_prev_day)
            {
                lut[i - 1].amount_of_offset_change = amount_of_offset_change_at_prev_day;
            values.time_at_offset_change_value = (transition.from - cctz::civil_second(date)) / Values::OffsetChangeFactor;
            values.amount_of_offset_change_value = (transition.to - transition.from) / Values::OffsetChangeFactor;

                const auto utc_offset_at_beginning_of_day = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t(lut[i - 1].date)).offset;
            // std::cerr << time_zone << ", " << date << ": change from " << transition.from << " to " << transition.to << "\n";
            // std::cerr << time_zone << ", " << date << ": change at " << values.time_at_offset_change() << " with " << values.amount_of_offset_change() << "\n";

                /// Find a time (timestamp offset from beginning of day),
                /// when UTC offset was changed. Search is performed with 15-minute granularity, assuming it is enough.
            /// We don't support too large changes.
            if (values.amount_of_offset_change_value > 24 * 4)
                values.amount_of_offset_change_value = 24 * 4;
            else if (values.amount_of_offset_change_value < -24 * 4)
                values.amount_of_offset_change_value = -24 * 4;

                time_t time_at_offset_change = 900;
                while (time_at_offset_change < 86400)
                {
                    auto utc_offset_at_current_time = cctz_time_zone.lookup(std::chrono::system_clock::from_time_t(
                        lut[i - 1].date + time_at_offset_change)).offset;

                    if (utc_offset_at_current_time != utc_offset_at_beginning_of_day)
                        break;

                    time_at_offset_change += 900;
                }

                lut[i - 1].time_at_offset_change = time_at_offset_change;

                /// We doesn't support cases when time change results in switching to previous day.
                if (static_cast<int>(lut[i - 1].time_at_offset_change) + static_cast<int>(lut[i - 1].amount_of_offset_change) < 0)
                    lut[i - 1].time_at_offset_change = -lut[i - 1].amount_of_offset_change;
            }
            /// We don't support cases when time change results in switching to previous day.
            /// Shift the point of time change later.
            if (values.time_at_offset_change_value + values.amount_of_offset_change_value < 0)
                values.time_at_offset_change_value = -values.amount_of_offset_change_value;
        }

        /// Going to next day.
        ++date;
        ++i;
    }
    while (start_of_day <= DATE_LUT_MAX && i <= DATE_LUT_MAX_DAY_NUM);
    while (i < DATE_LUT_SIZE && lut[i - 1].year <= DATE_LUT_MAX_YEAR);

    /// Fill excessive part of lookup table. This is needed only to simplify handling of overflow cases.
    while (i < DATE_LUT_SIZE)
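The new code asks cctz directly for the UTC-offset transition that touches the current civil day, instead of probing offsets at 15-minute steps as before. A small self-contained probe of that API (the zone name and date are arbitrary examples; next_transition and civil_transition are the same cctz calls the hunk above uses):

    #include <chrono>
    #include <iostream>
    #include <cctz/civil_time.h>
    #include <cctz/time_zone.h>

    int main()
    {
        cctz::time_zone zone;
        if (!cctz::load_time_zone("Europe/Amsterdam", &zone))
            return 1;

        /// Start from the (possibly ambiguous) local midnight, as the LUT builder does.
        const cctz::civil_day day{2021, 3, 28}; /// A DST-change day in this zone.
        const auto midnight = zone.lookup(day).pre;

        /// next_transition() fills `from`/`to` with the civil times on both
        /// sides of the next UTC-offset change after the given instant.
        cctz::time_zone::civil_transition transition;
        if (zone.next_transition(midnight - std::chrono::seconds(1), &transition)
            && cctz::civil_day(transition.from) == day)
            std::cout << "offset changes on this day: " << transition.from
                      << " -> " << transition.to << "\n";
    }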
File diff suppressed because it is too large
@ -7,3 +7,8 @@
  * See DateLUTImpl for usage examples.
  */
STRONG_TYPEDEF(UInt16, DayNum)

/** Represent number of days since 1970-01-01 but in extended range,
 * for dates before 1970-01-01 and after 2105
 */
STRONG_TYPEDEF(Int32, ExtendedDayNum)
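DayNum and ExtendedDayNum wrap plain integers in distinct types, so the narrow and extended day-number domains cannot be mixed silently; any conversion has to be spelled out, as LocalDate::getDayNum() below does with toUnderType(). A fragment for illustration only (it reuses the types and accessor from this patch and is not compilable on its own):

    ExtendedDayNum extended(42);
    // DayNum d = extended;   // error: distinct strong typedefs, no implicit conversion
    DayNum narrowed(static_cast<UInt16>(extended.toUnderType()));   // narrowing is explicit and visible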
@ -92,20 +92,10 @@ public:
    LocalDate(const LocalDate &) noexcept = default;
    LocalDate & operator= (const LocalDate &) noexcept = default;

    LocalDate & operator= (time_t time)
    {
        init(time);
        return *this;
    }

    operator time_t() const
    {
        return DateLUT::instance().makeDate(m_year, m_month, m_day);
    }

    DayNum getDayNum() const
    {
        return DateLUT::instance().makeDayNum(m_year, m_month, m_day);
        const auto & lut = DateLUT::instance();
        return DayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
    }

    operator DayNum() const
@ -166,12 +156,3 @@ public:
};

static_assert(sizeof(LocalDate) == 4);


namespace std
{
inline string to_string(const LocalDate & date)
{
    return date.toString();
}
}
@ -29,29 +29,16 @@ private:
    /// NOTE We may use attribute packed instead, but it is less portable.
    unsigned char pad = 0;

    void init(time_t time)
    void init(time_t time, const DateLUTImpl & time_zone)
    {
        if (unlikely(time > DATE_LUT_MAX || time == 0))
        {
            m_year = 0;
            m_month = 0;
            m_day = 0;
            m_hour = 0;
            m_minute = 0;
            m_second = 0;
        DateLUTImpl::DateTimeComponents components = time_zone.toDateTimeComponents(time);

            return;
        }

        const auto & date_lut = DateLUT::instance();
        const auto & values = date_lut.getValues(time);

        m_year = values.year;
        m_month = values.month;
        m_day = values.day_of_month;
        m_hour = date_lut.toHour(time);
        m_minute = date_lut.toMinute(time);
        m_second = date_lut.toSecond(time);
        m_year = components.date.year;
        m_month = components.date.month;
        m_day = components.date.day;
        m_hour = components.time.hour;
        m_minute = components.time.minute;
        m_second = components.time.second;

        (void)pad; /// Suppress unused private field warning.
    }
@ -73,9 +60,9 @@ private:
    }

public:
    explicit LocalDateTime(time_t time)
    explicit LocalDateTime(time_t time, const DateLUTImpl & time_zone = DateLUT::instance())
    {
        init(time);
        init(time, time_zone);
    }

    LocalDateTime(unsigned short year_, unsigned char month_, unsigned char day_,
@ -104,19 +91,6 @@ public:
    LocalDateTime(const LocalDateTime &) noexcept = default;
    LocalDateTime & operator= (const LocalDateTime &) noexcept = default;

    LocalDateTime & operator= (time_t time)
    {
        init(time);
        return *this;
    }

    operator time_t() const
    {
        return m_year == 0
            ? 0
            : DateLUT::instance().makeDateTime(m_year, m_month, m_day, m_hour, m_minute, m_second);
    }

    unsigned short year() const { return m_year; }
    unsigned char month() const { return m_month; }
    unsigned char day() const { return m_day; }
@ -132,8 +106,30 @@ public:
    void second(unsigned char x) { m_second = x; }

    LocalDate toDate() const { return LocalDate(m_year, m_month, m_day); }
    LocalDateTime toStartOfDate() const { return LocalDateTime(m_year, m_month, m_day, 0, 0, 0); }

    LocalDateTime toStartOfDate() { return LocalDateTime(m_year, m_month, m_day, 0, 0, 0); }
    std::string toString() const
    {
        std::string s{"0000-00-00 00:00:00"};

        s[0] += m_year / 1000;
        s[1] += (m_year / 100) % 10;
        s[2] += (m_year / 10) % 10;
        s[3] += m_year % 10;
        s[5] += m_month / 10;
        s[6] += m_month % 10;
        s[8] += m_day / 10;
        s[9] += m_day % 10;

        s[11] += m_hour / 10;
        s[12] += m_hour % 10;
        s[14] += m_minute / 10;
        s[15] += m_minute % 10;
        s[17] += m_second / 10;
        s[18] += m_second % 10;

        return s;
    }

    bool operator< (const LocalDateTime & other) const
    {
@ -167,14 +163,3 @@ public:
};

static_assert(sizeof(LocalDateTime) == 8);


namespace std
{
inline string to_string(const LocalDateTime & datetime)
{
    stringstream str;
    str << datetime;
    return str.str();
}
}
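The new toString() formats without snprintf or streams by patching decimal digits into a fixed-width template string: each '0' in "0000-00-00 00:00:00" is bumped by the corresponding digit. The trick in isolation (a standalone sketch, not the header itself):

    #include <iostream>
    #include <string>

    /// Write a date into a fixed "0000-00-00" template by adding each decimal
    /// digit to the matching '0' character; no formatting library involved.
    std::string formatDate(unsigned year, unsigned month, unsigned day)
    {
        std::string s{"0000-00-00"};
        s[0] += year / 1000;
        s[1] += (year / 100) % 10;
        s[2] += (year / 10) % 10;
        s[3] += year % 10;
        s[5] += month / 10;
        s[6] += month % 10;
        s[8] += day / 10;
        s[9] += day % 10;
        return s;
    }

    int main()
    {
        std::cout << formatDate(2021, 4, 12) << "\n"; /// prints 2021-04-12
    }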
@ -25,6 +25,12 @@ namespace common
        return x - y;
    }

    template <typename T>
    inline auto NO_SANITIZE_UNDEFINED negateIgnoreOverflow(T x)
    {
        return -x;
    }

    template <typename T>
    inline bool addOverflow(T x, T y, T & res)
    {
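Helpers like addOverflow typically wrap the compiler's checked-arithmetic builtins, which report overflow and still produce the wrapped result. A hedged sketch of that usual shape, assuming GCC/Clang __builtin_add_overflow (the actual ClickHouse implementation may specialize per type):

    #include <cstdint>
    #include <iostream>

    /// Returns true when x + y overflows T; the (possibly wrapped) sum goes into res.
    template <typename T>
    inline bool addOverflowSketch(T x, T y, T & res)
    {
        return __builtin_add_overflow(x, y, &res); /// GCC/Clang builtin.
    }

    int main()
    {
        int64_t res;
        std::cout << addOverflowSketch<int64_t>(INT64_MAX, 1, res) << "\n";          /// 1: overflow
        std::cout << addOverflowSketch<int64_t>(1, 2, res) << " " << res << "\n";    /// 0 3
    }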
@ -1,45 +1,28 @@
// https://stackoverflow.com/questions/1413445/reading-a-password-from-stdcin

#include <common/setTerminalEcho.h>
#include <common/errnoToString.h>
#include <stdexcept>
#include <cstring>
#include <string>

#ifdef WIN32
#include <windows.h>
#else
#include <termios.h>
#include <unistd.h>
#include <errno.h>
#endif


void setTerminalEcho(bool enable)
{
#ifdef WIN32
    auto handle = GetStdHandle(STD_INPUT_HANDLE);
    DWORD mode;
    if (!GetConsoleMode(handle, &mode))
        throw std::runtime_error(std::string("setTerminalEcho failed get: ") + std::to_string(GetLastError()));
    /// Obtain terminal attributes,
    /// toggle the ECHO flag
    /// and set them back.

    if (!enable)
        mode &= ~ENABLE_ECHO_INPUT;
    else
        mode |= ENABLE_ECHO_INPUT;
    struct termios tty{};

    if (!SetConsoleMode(handle, mode))
        throw std::runtime_error(std::string("setTerminalEcho failed set: ") + std::to_string(GetLastError()));
#else
    struct termios tty;
    if (tcgetattr(STDIN_FILENO, &tty))
    if (0 != tcgetattr(STDIN_FILENO, &tty))
        throw std::runtime_error(std::string("setTerminalEcho failed get: ") + errnoToString(errno));
    if (!enable)
        tty.c_lflag &= ~ECHO;
    else
        tty.c_lflag |= ECHO;

    auto ret = tcsetattr(STDIN_FILENO, TCSANOW, &tty);
    if (ret)
    if (enable)
        tty.c_lflag |= ECHO;
    else
        tty.c_lflag &= ~ECHO;

    if (0 != tcsetattr(STDIN_FILENO, TCSANOW, &tty))
        throw std::runtime_error(std::string("setTerminalEcho failed set: ") + errnoToString(errno));
#endif
}
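The function above is the standard POSIX no-echo prompt: flip the ECHO bit in the terminal's local-mode flags with tcgetattr/tcsetattr. A standalone sketch of how such a helper is used to read a password (echo off, read, echo restored before any further output):

    #include <iostream>
    #include <string>
    #include <termios.h>
    #include <unistd.h>

    // Same technique as setTerminalEcho: toggle the ECHO flag in c_lflag.
    static void setEcho(bool enable)
    {
        struct termios tty{};
        if (0 != tcgetattr(STDIN_FILENO, &tty))
            return;
        if (enable)
            tty.c_lflag |= ECHO;
        else
            tty.c_lflag &= ~ECHO;
        tcsetattr(STDIN_FILENO, TCSANOW, &tty);
    }

    int main()
    {
        std::string password;
        std::cout << "Password: " << std::flush;
        setEcho(false);
        std::getline(std::cin, password);
        setEcho(true); /// Restore echo before printing anything else.
        std::cout << "\nGot " << password.size() << " characters.\n";
    }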
@ -12,6 +12,7 @@ private:
    T t;

public:
    using UnderlyingType = T;
    template <class Enable = typename std::is_copy_constructible<T>::type>
    explicit StrongTypedef(const T & t_) : t(t_) {}
    template <class Enable = typename std::is_move_constructible<T>::type>
@ -1,25 +1,2 @@
include (${ClickHouse_SOURCE_DIR}/cmake/add_check.cmake)

add_executable (date_lut2 date_lut2.cpp)
add_executable (date_lut3 date_lut3.cpp)
add_executable (date_lut_default_timezone date_lut_default_timezone.cpp)
add_executable (local_date_time_comparison local_date_time_comparison.cpp)
add_executable (realloc-perf allocator.cpp)

set(PLATFORM_LIBS ${CMAKE_DL_LIBS})

target_link_libraries (date_lut2 PRIVATE common ${PLATFORM_LIBS})
target_link_libraries (date_lut3 PRIVATE common ${PLATFORM_LIBS})
target_link_libraries (date_lut_default_timezone PRIVATE common ${PLATFORM_LIBS})
target_link_libraries (local_date_time_comparison PRIVATE common ${PLATFORM_LIBS})
target_link_libraries (realloc-perf PRIVATE common)
add_check(local_date_time_comparison)

if(USE_GTEST)
    add_executable(unit_tests_libcommon gtest_json_test.cpp gtest_strong_typedef.cpp gtest_find_symbols.cpp)
    target_link_libraries(unit_tests_libcommon PRIVATE common ${GTEST_MAIN_LIBRARIES} ${GTEST_LIBRARIES})
    add_check(unit_tests_libcommon)
endif()

add_executable (dump_variable dump_variable.cpp)
target_link_libraries (dump_variable PRIVATE clickhouse_common_io)
@ -1,47 +0,0 @@
#include <cstdlib>
#include <cstring>
#include <vector>
#include <thread>


void thread_func()
{
    for (size_t i = 0; i < 100; ++i)
    {
        size_t size = 4096;

        void * buf = malloc(size);
        if (!buf)
            abort();
        memset(buf, 0, size);

        while (size < 1048576)
        {
            size_t next_size = size * 4;

            void * new_buf = realloc(buf, next_size);
            if (!new_buf)
                abort();
            buf = new_buf;

            memset(reinterpret_cast<char*>(buf) + size, 0, next_size - size);
            size = next_size;
        }

        free(buf);
    }
}


int main(int, char **)
{
    std::vector<std::thread> threads(16);
    for (size_t i = 0; i < 1000; ++i)
    {
        for (auto & thread : threads)
            thread = std::thread(thread_func);
        for (auto & thread : threads)
            thread.join();
    }
    return 0;
}
@ -1,53 +0,0 @@
#include <iostream>
#include <cstring>

#include <common/DateLUT.h>


static std::string toString(time_t Value)
{
    struct tm tm;
    char buf[96];

    localtime_r(&Value, &tm);
    snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d",
        tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);

    return buf;
}

static time_t orderedIdentifierToDate(unsigned value)
{
    struct tm tm;

    memset(&tm, 0, sizeof(tm));

    tm.tm_year = value / 10000 - 1900;
    tm.tm_mon = (value % 10000) / 100 - 1;
    tm.tm_mday = value % 100;
    tm.tm_isdst = -1;

    return mktime(&tm);
}


void loop(time_t begin, time_t end, int step)
{
    const auto & date_lut = DateLUT::instance();

    for (time_t t = begin; t < end; t += step)
        std::cout << toString(t)
            << ", " << toString(date_lut.toTime(t))
            << ", " << date_lut.toHour(t)
            << std::endl;
}


int main(int, char **)
{
    loop(orderedIdentifierToDate(20101031), orderedIdentifierToDate(20101101), 15 * 60);
    loop(orderedIdentifierToDate(20100328), orderedIdentifierToDate(20100330), 15 * 60);
    loop(orderedIdentifierToDate(20141020), orderedIdentifierToDate(20141106), 15 * 60);

    return 0;
}
@ -1,62 +0,0 @@
#include <iostream>
#include <cstring>

#include <Poco/Exception.h>

#include <common/DateLUT.h>


static std::string toString(time_t Value)
{
    struct tm tm;
    char buf[96];

    localtime_r(&Value, &tm);
    snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d",
        tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);

    return buf;
}

static time_t orderedIdentifierToDate(unsigned value)
{
    struct tm tm;

    memset(&tm, 0, sizeof(tm));

    tm.tm_year = value / 10000 - 1900;
    tm.tm_mon = (value % 10000) / 100 - 1;
    tm.tm_mday = value % 100;
    tm.tm_isdst = -1;

    return mktime(&tm);
}


void loop(time_t begin, time_t end, int step)
{
    const auto & date_lut = DateLUT::instance();

    for (time_t t = begin; t < end; t += step)
    {
        time_t t2 = date_lut.makeDateTime(date_lut.toYear(t), date_lut.toMonth(t), date_lut.toDayOfMonth(t),
            date_lut.toHour(t), date_lut.toMinute(t), date_lut.toSecond(t));

        std::string s1 = toString(t);
        std::string s2 = toString(t2);

        std::cerr << s1 << ", " << s2 << std::endl;

        if (s1 != s2)
            throw Poco::Exception("Test failed.");
    }
}


int main(int, char **)
{
    loop(orderedIdentifierToDate(20101031), orderedIdentifierToDate(20101101), 15 * 60);
    loop(orderedIdentifierToDate(20100328), orderedIdentifierToDate(20100330), 15 * 60);

    return 0;
}
@ -1,31 +0,0 @@
#include <iostream>
#include <common/DateLUT.h>
#include <Poco/Exception.h>

int main(int, char **)
{
    try
    {
        const auto & date_lut = DateLUT::instance();
        std::cout << "Detected default timezone: `" << date_lut.getTimeZone() << "'" << std::endl;
        time_t now = time(nullptr);
        std::cout << "Current time: " << date_lut.timeToString(now)
                  << ", UTC: " << DateLUT::instance("UTC").timeToString(now) << std::endl;
    }
    catch (const Poco::Exception & e)
    {
        std::cerr << e.displayText() << std::endl;
        return 1;
    }
    catch (std::exception & e)
    {
        std::cerr << "std::exception: " << e.what() << std::endl;
        return 2;
    }
    catch (...)
    {
        std::cerr << "Some exception" << std::endl;
        return 3;
    }
    return 0;
}
@ -1,656 +0,0 @@
#include <vector>
#include <string>
#include <exception>
#include <common/JSON.h>

#include <boost/range/irange.hpp>

using namespace std::literals::string_literals;

#include <gtest/gtest.h>

enum class ResultType
{
    Return,
    Throw
};

struct GetStringTestRecord
{
    const char * input;
    ResultType result_type;
    const char * result;
};

TEST(JSONSuite, SimpleTest)
{
    std::vector<GetStringTestRecord> test_data =
    {
        { R"("name")", ResultType::Return, "name" },
        { R"("Вафельница Vitek WX-1102 FL")", ResultType::Return, "Вафельница Vitek WX-1102 FL" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("184509")", ResultType::Return, "184509" },
        { R"("category")", ResultType::Return, "category" },
        { R"("Все для детей/Детская техника/Vitek")", ResultType::Return, "Все для детей/Детская техника/Vitek" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("В наличии")", ResultType::Return, "В наличии" },
        { R"("price")", ResultType::Return, "price" },
        { R"("2390.00")", ResultType::Return, "2390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("Карточка")", ResultType::Return, "Карточка" },
        { R"("position")", ResultType::Return, "position" },
        { R"("detail")", ResultType::Return, "detail" },
        { R"("actionField")", ResultType::Return, "actionField" },
        { R"("list")", ResultType::Return, "list" },
        { R"("http://www.techport.ru/q/?t=вафельница&sort=price&sdim=asc")", ResultType::Return, "http://www.techport.ru/q/?t=вафельница&sort=price&sdim=asc" },
        { R"("action")", ResultType::Return, "action" },
        { R"("detail")", ResultType::Return, "detail" },
        { R"("products")", ResultType::Return, "products" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Вафельница Vitek WX-1102 FL")", ResultType::Return, "Вафельница Vitek WX-1102 FL" },
        { R"("id")", ResultType::Return, "id" },
        { R"("184509")", ResultType::Return, "184509" },
        { R"("price")", ResultType::Return, "price" },
        { R"("2390.00")", ResultType::Return, "2390.00" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("Vitek")", ResultType::Return, "Vitek" },
        { R"("category")", ResultType::Return, "category" },
        { R"("Все для детей/Детская техника/Vitek")", ResultType::Return, "Все для детей/Детская техника/Vitek" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("В наличии")", ResultType::Return, "В наличии" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("isAuthorized")", ResultType::Return, "isAuthorized" },
        { R"("isSubscriber")", ResultType::Return, "isSubscriber" },
        { R"("postType")", ResultType::Return, "postType" },
        { R"("Новости")", ResultType::Return, "Новости" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("Электроплита GEFEST Брест ЭПНД 5140-01 0001")", ResultType::Return, "Электроплита GEFEST Брест ЭПНД 5140-01 0001" },
        { R"("price")", ResultType::Return, "price" },
        { R"("currencyCode")", ResultType::Return, "currencyCode" },
        { R"("RUB")", ResultType::Return, "RUB" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("trash_login")", ResultType::Return, "trash_login" },
        { R"("novikoff")", ResultType::Return, "novikoff" },
        { R"("trash_cat_link")", ResultType::Return, "trash_cat_link" },
        { R"("progs")", ResultType::Return, "progs" },
        { R"("trash_parent_link")", ResultType::Return, "trash_parent_link" },
        { R"("content")", ResultType::Return, "content" },
        { R"("trash_posted_parent")", ResultType::Return, "trash_posted_parent" },
        { R"("content.01.2016")", ResultType::Return, "content.01.2016" },
        { R"("trash_posted_cat")", ResultType::Return, "trash_posted_cat" },
        { R"("progs.01.2016")", ResultType::Return, "progs.01.2016" },
        { R"("trash_virus_count")", ResultType::Return, "trash_virus_count" },
        { R"("trash_is_android")", ResultType::Return, "trash_is_android" },
        { R"("trash_is_wp8")", ResultType::Return, "trash_is_wp8" },
        { R"("trash_is_ios")", ResultType::Return, "trash_is_ios" },
        { R"("trash_posted")", ResultType::Return, "trash_posted" },
        { R"("01.2016")", ResultType::Return, "01.2016" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("merchantId")", ResultType::Return, "merchantId" },
        { R"("13694_49246")", ResultType::Return, "13694_49246" },
        { R"("cps-source")", ResultType::Return, "cps-source" },
        { R"("wargaming")", ResultType::Return, "wargaming" },
        { R"("cps_provider")", ResultType::Return, "cps_provider" },
        { R"("default")", ResultType::Return, "default" },
        { R"("errorReason")", ResultType::Return, "errorReason" },
        { R"("no errors")", ResultType::Return, "no errors" },
        { R"("scid")", ResultType::Return, "scid" },
        { R"("isAuthPayment")", ResultType::Return, "isAuthPayment" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("rubric")", ResultType::Return, "rubric" },
        { R"("")", ResultType::Return, "" },
        { R"("rubric")", ResultType::Return, "rubric" },
        { R"("Мир")", ResultType::Return, "Мир" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("__ym")", ResultType::Return, "__ym" },
        { R"("ecommerce")", ResultType::Return, "ecommerce" },
        { R"("impressions")", ResultType::Return, "impressions" },
        { R"("id")", ResultType::Return, "id" },
        { R"("863813")", ResultType::Return, "863813" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Happy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Happy, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("863839")", ResultType::Return, "863839" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Pretty kitten, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Pretty kitten, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("863847")", ResultType::Return, "863847" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Little tiger, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Little tiger, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911480")", ResultType::Return, "911480" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Puppy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Puppy, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911484")", ResultType::Return, "911484" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Little bears, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Little bears, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911489")", ResultType::Return, "911489" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Dolphin, возраст 2-4 года, трикотаж")", ResultType::Return, "Футболка детская 3D Dolphin, возраст 2-4 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911496")", ResultType::Return, "911496" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Pretty, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Pretty, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911504")", ResultType::Return, "911504" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Fairytale, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Fairytale, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911508")", ResultType::Return, "911508" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Kittens, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Kittens, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911512")", ResultType::Return, "911512" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Sunshine, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Sunshine, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911516")", ResultType::Return, "911516" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Dog in bag, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Dog in bag, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911520")", ResultType::Return, "911520" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Cute puppy, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Cute puppy, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911524")", ResultType::Return, "911524" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Rabbit, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Rabbit, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("911528")", ResultType::Return, "911528" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Футболка детская 3D Turtle, возраст 1-2 года, трикотаж")", ResultType::Return, "Футболка детская 3D Turtle, возраст 1-2 года, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("390.00")", ResultType::Return, "390.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("888616")", ResultType::Return, "888616" },
        { R"("name")", ResultType::Return, "name" },
        { "\"3Д Футболка мужская \\\"Collorista\\\" Светлое завтра р-р XL(52-54), 100% хлопок, трикотаж\"", ResultType::Return, "3Д Футболка мужская \"Collorista\" Светлое завтра р-р XL(52-54), 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Одежда и обувь/Мужская одежда/Футболки/")", ResultType::Return, "/Одежда и обувь/Мужская одежда/Футболки/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("406.60")", ResultType::Return, "406.60" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("913361")", ResultType::Return, "913361" },
        { R"("name")", ResultType::Return, "name" },
        { R"("3Д Футболка детская World р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская World р-р 8-10, 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("470.00")", ResultType::Return, "470.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("913364")", ResultType::Return, "913364" },
        { R"("name")", ResultType::Return, "name" },
        { R"("3Д Футболка детская Force р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Force р-р 8-10, 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("470.00")", ResultType::Return, "470.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("913367")", ResultType::Return, "913367" },
        { R"("name")", ResultType::Return, "name" },
        { R"("3Д Футболка детская Winter tale р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Winter tale р-р 8-10, 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("470.00")", ResultType::Return, "470.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("913385")", ResultType::Return, "913385" },
        { R"("name")", ResultType::Return, "name" },
        { R"("3Д Футболка детская Moonshine р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Moonshine р-р 8-10, 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("470.00")", ResultType::Return, "470.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("id")", ResultType::Return, "id" },
        { R"("913391")", ResultType::Return, "913391" },
        { R"("name")", ResultType::Return, "name" },
        { R"("3Д Футболка детская Shaman р-р 8-10, 100% хлопок, трикотаж")", ResultType::Return, "3Д Футболка детская Shaman р-р 8-10, 100% хлопок, трикотаж" },
        { R"("category")", ResultType::Return, "category" },
        { R"("/Летние товары/Летний текстиль/")", ResultType::Return, "/Летние товары/Летний текстиль/" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("")", ResultType::Return, "" },
        { R"("price")", ResultType::Return, "price" },
        { R"("470.00")", ResultType::Return, "470.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("/retailrocket/")", ResultType::Return, "/retailrocket/" },
        { R"("position")", ResultType::Return, "position" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/")", ResultType::Return, "/911488/futbolka-detskaya-3d-dolphin-vozrast-1-2-goda-trikotazh/" },
        { R"("usertype")", ResultType::Return, "usertype" },
        { R"("visitor")", ResultType::Return, "visitor" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("__ym")", ResultType::Return, "__ym" },
        { R"("ecommerce")", ResultType::Return, "ecommerce" },
        { R"("impressions")", ResultType::Return, "impressions" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("experiments")", ResultType::Return, "experiments" },
        { R"("lang")", ResultType::Return, "lang" },
        { R"("ru")", ResultType::Return, "ru" },
        { R"("los_portal")", ResultType::Return, "los_portal" },
        { R"("los_level")", ResultType::Return, "los_level" },
        { R"("none")", ResultType::Return, "none" },
        { R"("__ym")", ResultType::Return, "__ym" },
        { R"("ecommerce")", ResultType::Return, "ecommerce" },
        { R"("currencyCode")", ResultType::Return, "currencyCode" },
        { R"("RUR")", ResultType::Return, "RUR" },
        { R"("impressions")", ResultType::Return, "impressions" },
        { R"("name")", ResultType::Return, "name" },
        { R"("Чайник электрический Mystery MEK-1627, белый")", ResultType::Return, "Чайник электрический Mystery MEK-1627, белый" },
        { R"("brand")", ResultType::Return, "brand" },
        { R"("Mystery")", ResultType::Return, "Mystery" },
        { R"("id")", ResultType::Return, "id" },
        { R"("187180")", ResultType::Return, "187180" },
        { R"("category")", ResultType::Return, "category" },
        { R"("Мелкая бытовая техника/Мелкие кухонные приборы/Чайники электрические/Mystery")", ResultType::Return, "Мелкая бытовая техника/Мелкие кухонные приборы/Чайники электрические/Mystery" },
        { R"("variant")", ResultType::Return, "variant" },
        { R"("В наличии")", ResultType::Return, "В наличии" },
        { R"("price")", ResultType::Return, "price" },
        { R"("1630.00")", ResultType::Return, "1630.00" },
        { R"("list")", ResultType::Return, "list" },
        { R"("Карточка")", ResultType::Return, "Карточка" },
        { R"("position")", ResultType::Return, "position" },
        { R"("detail")", ResultType::Return, "detail" },
        { R"("actionField")", ResultType::Return, "actionField" },
        { R"("list")", ResultType::Return, "list" },
        { "\0\"", ResultType::Throw, "JSON: expected \", got \0" },
        { "\"/igrushki/konstruktory\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/1290414/komplekt-zhenskiy-dzhemper-plusbryuki-m-254-09-malina-plustemno-siniy-\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/Творчество/Рисование/Инструменты и кра\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобильных аккумуляторов/Пуско-зарядные устр\xD0\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Строительство и ремонт/Силовая техника/Зарядные устройств\xD0\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобиль\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\0t", ResultType::Throw, "JSON: expected \", got \0" },
        { "\"/Хозтовары/Хранение вещей и организа\xD1\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/Хозтовары/Товары для стир\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"li\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/734859/samolet-radioupravlyaemyy-istrebitel-rabotaet-o\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/kosmetika-i-parfyum/parfyumeriya/mu\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/ko\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "", ResultType::Throw, "JSON: begin >= end." },
        { "\"/stroitelstvo-i-remont/stroit\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/av\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/s\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/Строительство и ремонт/Строительный инструмент/Изм\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/avto/soputstvuy\0l", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/str\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Отвертка 2 в 1 \\\"TUNDRA basic\\\" 5х75 мм (+,-) \0\xFF", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/stroitelstvo-i-remont/stroitelnyy-instrument/avtoinstrumen\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Мелкая бытовая техника/Мелки\xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Пряжа \\\"Бамбук стрейч\\0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"Карандаш чёрнографитны\xD0\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0l", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"/1071547/karandash-chernografitnyy-volshebstvo-nv-kruglyy-d-7-2mm-dl-176mm-plast-tuba/\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
        { "\"ca\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"ca\0e", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/1165424/chipbord-vyrubnoy-dlya-skrapbukinga-malyshi-mikki-maus-disney-bebi\0t", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/posuda/kuhonnye-prinadlezhnosti-i-i\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Канцтовары/Ежедневники и блокн\xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/kanctovary/ezhednevniki-i-blok\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Стакан \xD0\0a", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Набор бумаги для скрапбукинга \\\"Мои первый годик\\\": Микки Маус, Дисней бэби, 12 листов 29.5 х 29.5 см, 160\0\x80", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"c\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Органайзер для хранения аксессуаров, \0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"quantity\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Сменный блок для тетрадей на кольцах А5, 160 листов клетка, офсет \xE2\x84\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Сувениры/Ф\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"\0\"", ResultType::Return, "\0" },
|
||||
{ "\"\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"va\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"ca\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"В \0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/letnie-tovary/z\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Посудомоечная машина Ha\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Крупная бытов\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Полочная акустическая система Magnat Needl\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"brand\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"pos\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"c\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"var\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Телевизоры и видеотехника/Всё для домашних кинотеатр\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Флеш-диск Transcend JetFlash 620 8GB (TS8GJF62\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Табурет Мег\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"variant\0\x04", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Катал\xD0\0\"", ResultType::Return, "Катал\xD0\0" },
|
||||
{ "\"К\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Полочная акустическая система Magnat Needl\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"brand\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"pos\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"c\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"17\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/igrushki/razvivayusc\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Ключница \\\"\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Игр\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Игрушки/Игрушки для девочек/Игровые модули дл\xD1\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Крупная бытовая техника/Стиральные машины/С фронт\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\0 ", ResultType::Throw, "JSON: expected \", got \0" },
|
||||
{ "\"Светодиодная лента SMD3528, 5 м. IP33, 60LED, зеленый, 4,8W/мет\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Сантехника/Мебель для ванных комнат/Стол\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\0o", ResultType::Throw, "JSON: expected \", got \0" },
|
||||
{ "\"/igrushki/konstruktory\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/posuda/kuhonnye-prinadlezhnosti-i-instrumenty/kuhonnye-pr\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/1290414/komplekt-zhenskiy-dzhemper-plusbryuki-m-254-09-malina-plustemno-siniy-\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Творчество/Рисование/Инструменты и кра\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобильных аккумуляторов/Пуско-зарядные устр\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Строительство и ремонт/Силовая техника/Зарядные устройств\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Строительство и ремонт/Силовая техника/Зарядные устройства для автомобиль\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\0 ", ResultType::Throw, "JSON: expected \", got \0" },
|
||||
{ "\"/Хозтовары/Хранение вещей и организа\xD1\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Хозтовары/Товары для стир\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"li\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/igrushki/igrus\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/734859/samolet-radioupravlyaemyy-istrebitel-rabotaet-o\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/kosmetika-i-parfyum/parfyumeriya/mu\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/ko\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/avto/avtomobilnyy\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/stroitelstvo-i-remont/stroit\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/stroitelstvo-i-remont/stroitelnyy-instrument/av\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/s\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Строительство и ремонт/Строительный инструмент/Изм\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/avto/soputstvuy\0\"", ResultType::Return, "/avto/soputstvuy\0" },
|
||||
{ "\"/str\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Отвертка 2 в 1 \\\"TUNDRA basic\\\" 5х75 мм (+,-) \0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/stroitelstvo-i-remont/stroitelnyy-instrument/avtoinstrumen\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Чайник электрический Vitesse\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Мелкая бытовая техника/Мелки\xD0\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Пряжа \\\"Бамбук стрейч\\0о", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Карандаш чёрнографитны\xD0\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0\"", ResultType::Return, "/Творчество/Рукоделие, аппликации/Пряжа и шерсть для \xD0\0" },
|
||||
{ "\"/1071547/karandash-chernografitnyy-volshebstvo-nv-kruglyy-d-7-2mm-dl-176mm-plast-tuba/\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"ca\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Подаро\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Средство для прочис\xD1\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"i\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/p\0\"", ResultType::Return, "/p\0" },
|
||||
{ "\"/Сувениры/Магниты, н\xD0\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Дерев\xD0\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/prazdniki/svadba/svadebnaya-c\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Канцт\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Праздники/То\xD0\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"v\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Косметика \xD0\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Спорт и отдых/Настольные игры/Покер, руле\xD1\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"categ\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/retailr\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/retailrocket\0k", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Ежедневник недат А5 140л кл,ляссе,обл пв\0=", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/432809/ezhednevnik-organayzer-sredniy-s-remeshkom-na-knopke-v-oblozhke-kalkulyator-kalendar-do-\0\xD0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/1165424/chipbord-vyrubnoy-dlya-skrapbukinga-malyshi-mikki-maus-disney-bebi\0d", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/posuda/kuhonnye-prinadlezhnosti-i-i\0 ", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/Канцтовары/Ежедневники и блокн\xD0\0o", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"/kanctovary/ezhednevniki-i-blok\00", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Стакан \xD0\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"Набор бумаги для скрапбукинга \\\"Мои первый годик\\\": Микки Маус, Дисней бэби, 12 листов 29.5 х 29.5 см, 160\0\0", ResultType::Throw, "JSON: incorrect syntax (expected end of string, found end of JSON)." },
|
||||
{ "\"c\0\"", ResultType::Return, "c\0" },
|
||||
};
|
||||
|
||||
for (auto i : boost::irange(0, 1/*00000*/))
|
||||
{
|
||||
static_cast<void>(i);
|
||||
|
||||
for (auto & r : test_data)
|
||||
{
|
||||
try
|
||||
{
|
||||
JSON j(r.input, r.input + strlen(r.input));
|
||||
|
||||
ASSERT_EQ(j.getString(), r.result);
|
||||
ASSERT_TRUE(r.result_type == ResultType::Return);
|
||||
}
|
||||
catch (JSONException & e)
|
||||
{
|
||||
ASSERT_TRUE(r.result_type == ResultType::Throw);
|
||||
ASSERT_EQ(e.message(), r.result);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
@@ -5,6 +5,11 @@ add_library (daemon
)

target_include_directories (daemon PUBLIC ..)

if (OS_DARWIN AND NOT MAKE_STATIC_LIBRARIES)
    target_link_libraries (daemon PUBLIC -Wl,-undefined,dynamic_lookup)
endif()

target_link_libraries (daemon PUBLIC loggers PRIVATE clickhouse_common_io clickhouse_common_config common ${EXECINFO_LIBRARIES})

if (USE_SENTRY)
@@ -9,6 +9,7 @@
#include <common/getMemoryAmount.h>
#include <common/logger_useful.h>

#include <Common/formatReadable.h>
#include <Common/SymbolIndex.h>
#include <Common/StackTrace.h>
#include <Common/getNumberOfPhysicalCPUCores.h>
66 base/ext/scope_guard_safe.h Normal file
@@ -0,0 +1,66 @@
#pragma once

#include <ext/scope_guard.h>
#include <common/logger_useful.h>
#include <Common/MemoryTracker.h>

/// Same as SCOPE_EXIT() but blocks the MEMORY_LIMIT_EXCEEDED errors.
///
/// Typical example of SCOPE_EXIT_MEMORY() usage is when code under it may do
/// some tiny allocations that may fail under high memory pressure and/or low
/// max_memory_usage (and related limits).
///
/// NOTE: it should be used with caution.
#define SCOPE_EXIT_MEMORY(...) SCOPE_EXIT( \
    MemoryTracker::LockExceptionInThread lock_memory_tracker; \
    __VA_ARGS__; \
)

/// Same as SCOPE_EXIT() but wraps the code in try/catch/tryLogCurrentException.
///
/// SCOPE_EXIT_SAFE() should be used when an exception from the code under
/// SCOPE_EXIT() is not "that fatal" and an error message in the log is enough.
///
/// A good example is calling CurrentThread::detachQueryIfNotDetached().
///
/// An anti-pattern is calling WriteBuffer::finalize() under SCOPE_EXIT_SAFE()
/// (since finalize() can do a final write, and it is better to fail abnormally
/// than to ignore a write error).
///
/// NOTE: it should be used with double caution.
#define SCOPE_EXIT_SAFE(...) SCOPE_EXIT( \
    try \
    { \
        __VA_ARGS__; \
    } \
    catch (...) \
    { \
        tryLogCurrentException(__PRETTY_FUNCTION__); \
    } \
)

/// Same as SCOPE_EXIT() but:
/// - blocks the MEMORY_LIMIT_EXCEEDED errors,
/// - wraps the code in try/catch/tryLogCurrentException.
///
/// SCOPE_EXIT_MEMORY_SAFE() can be used when the error can be ignored, and in
/// addition to SCOPE_EXIT_SAFE() it will also lock MEMORY_LIMIT_EXCEEDED to
/// avoid such exceptions.
///
/// It exists as a separate helper since you do not always need to lock
/// MEMORY_LIMIT_EXCEEDED (there are cases when code under SCOPE_EXIT does
/// not do any allocations, while LockExceptionInThread increments an atomic
/// variable).
///
/// NOTE: it should be used with triple caution.
#define SCOPE_EXIT_MEMORY_SAFE(...) SCOPE_EXIT( \
    try \
    { \
        MemoryTracker::LockExceptionInThread lock_memory_tracker; \
        __VA_ARGS__; \
    } \
    catch (...) \
    { \
        tryLogCurrentException(__PRETTY_FUNCTION__); \
    } \
)
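Taken together, the three macros form a ladder of increasingly defensive scope guards. A minimal usage sketch (not part of the diff; it assumes the ClickHouse build environment, and the helper functions named below are hypothetical placeholders):

#include <ext/scope_guard_safe.h>

void processQuery()
{
    /// Runs at scope exit; MEMORY_LIMIT_EXCEEDED from its tiny allocations is suppressed.
    SCOPE_EXIT_MEMORY(updateTinyCounters());            /// updateTinyCounters() is hypothetical

    /// Runs at scope exit; any exception is logged via tryLogCurrentException() and swallowed.
    SCOPE_EXIT_SAFE(CurrentThread::detachQueryIfNotDetached());

    /// Both protections combined.
    SCOPE_EXIT_MEMORY_SAFE(detachAndUpdateCounters());  /// detachAndUpdateCounters() is hypothetical

    runQuery();                                         /// runQuery() is hypothetical
}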
@@ -178,7 +178,7 @@ tail:
        size -= padding;
    }

    /// Aligned unrolled copy. We will use all available SSE registers.
    /// Aligned unrolled copy. We will use half of available SSE registers.
    /// It's not possible to have both src and dst aligned.
    /// So, we will use aligned stores and unaligned loads.
    __m128i c0, c1, c2, c3, c4, c5, c6, c7;
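The comment describes a standard trick: since src and dst cannot both be 16-byte aligned, the loop pairs unaligned loads with aligned stores. A minimal sketch of that pattern (assuming SSE2 is available; an illustration only, not the memcpy implementation from this diff):

#include <emmintrin.h>
#include <cstddef>

/// Hypothetical helper: dst must be 16-byte aligned, src may be unaligned.
static void copyBlocks(char * __restrict dst, const char * __restrict src, size_t size)
{
    for (size_t i = 0; i + 16 <= size; i += 16)
    {
        __m128i block = _mm_loadu_si128(reinterpret_cast<const __m128i *>(src + i)); /// unaligned load
        _mm_store_si128(reinterpret_cast<__m128i *>(dst + i), block);                /// aligned store
    }
}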
@@ -2,7 +2,6 @@
#include <ctime>
#include <random>
#include <thread>

#include <mysqlxx/PoolWithFailover.h>

@@ -15,9 +14,12 @@ static bool startsWith(const std::string & s, const char * prefix)

using namespace mysqlxx;

PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & config_,
    const std::string & config_name_, const unsigned default_connections_,
    const unsigned max_connections_, const size_t max_tries_)
PoolWithFailover::PoolWithFailover(
    const Poco::Util::AbstractConfiguration & config_,
    const std::string & config_name_,
    const unsigned default_connections_,
    const unsigned max_connections_,
    const size_t max_tries_)
    : max_tries(max_tries_)
{
    shareable = config_.getBool(config_name_ + ".share_connection", false);
@@ -59,16 +61,38 @@ PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & con
    }
}

PoolWithFailover::PoolWithFailover(const std::string & config_name_, const unsigned default_connections_,
    const unsigned max_connections_, const size_t max_tries_)
    : PoolWithFailover{
        Poco::Util::Application::instance().config(), config_name_,
        default_connections_, max_connections_, max_tries_}

PoolWithFailover::PoolWithFailover(
    const std::string & config_name_,
    const unsigned default_connections_,
    const unsigned max_connections_,
    const size_t max_tries_)
    : PoolWithFailover{Poco::Util::Application::instance().config(),
                       config_name_, default_connections_, max_connections_, max_tries_}
{
}

PoolWithFailover::PoolWithFailover(
    const std::string & database,
    const RemoteDescription & addresses,
    const std::string & user,
    const std::string & password,
    size_t max_tries_)
    : max_tries(max_tries_)
    , shareable(false)
{
    /// Replicas have the same priority, but traversed replicas are moved to the end of the queue.
    for (const auto & [host, port] : addresses)
    {
        replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port));
    }
}

PoolWithFailover::PoolWithFailover(const PoolWithFailover & other)
    : max_tries{other.max_tries}, shareable{other.shareable}
    : max_tries{other.max_tries}
    , shareable{other.shareable}
{
    if (shareable)
    {
@@ -11,6 +11,8 @@
namespace mysqlxx
{
/** MySQL connection pool with support of failover.
 *
 * For dictionary source:
 * Has information about replicas and their priorities.
 * Tries to connect to a replica in order of priority. When priorities are equal, the replica with the maximum time without connections is chosen.
 *
@@ -68,42 +70,58 @@ namespace mysqlxx
    using PoolPtr = std::shared_ptr<Pool>;
    using Replicas = std::vector<PoolPtr>;

    /// [priority][index] -> replica.
    /// [priority][index] -> replica. Highest priority is 0.
    using ReplicasByPriority = std::map<int, Replicas>;

    ReplicasByPriority replicas_by_priority;

    /// Number of connection tries.
    size_t max_tries;
    /// Mutex for the set of replicas.
    std::mutex mutex;

    /// Can the Pool be shared
    bool shareable;

public:
    using Entry = Pool::Entry;
    using RemoteDescription = std::vector<std::pair<std::string, uint16_t>>;

    /**
     * config_name           Name of parameter in configuration file.
     * * Mysql dictionary source related params:
     * config_name           Name of parameter in configuration file for dictionary source.
     *
     * * Mysql storage related parameters:
     * replicas_description
     *
     * * Mutual parameters:
     * default_connections   Number of connections in pool to each replica at start.
     * max_connections       Maximum number of connections in pool to each replica.
     * max_tries_            Max number of connection tries.
     */
    PoolWithFailover(const std::string & config_name_,
    PoolWithFailover(
        const std::string & config_name_,
        unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
        unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
        size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);

    PoolWithFailover(const Poco::Util::AbstractConfiguration & config_,
    PoolWithFailover(
        const Poco::Util::AbstractConfiguration & config_,
        const std::string & config_name_,
        unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
        unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
        size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);

    PoolWithFailover(
        const std::string & database,
        const RemoteDescription & addresses,
        const std::string & user,
        const std::string & password,
        size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);

    PoolWithFailover(const PoolWithFailover & other);

    /** Allocates a connection to use. */
    Entry get();
};

using PoolWithFailoverPtr = std::shared_ptr<PoolWithFailover>;
}
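To make the new replicas-description constructor concrete, a minimal usage sketch (host names and credentials are hypothetical; it assumes the mysqlxx library from this repository):

#include <mysqlxx/PoolWithFailover.h>

int main()
{
    /// Two equal-priority replicas; traversed replicas move to the end of the queue.
    mysqlxx::PoolWithFailover::RemoteDescription addresses
    {
        {"mysql-replica-1.example.com", 3306},  /// hypothetical hosts
        {"mysql-replica-2.example.com", 3306},
    };

    mysqlxx::PoolWithFailover pool("test_db", addresses, "user", "password");
    mysqlxx::PoolWithFailover::Entry entry = pool.get();  /// Allocates a connection.
    entry->query("SELECT 1").execute();
    return 0;
}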
@@ -1,5 +1,2 @@
add_executable (mysqlxx_test mysqlxx_test.cpp)
target_link_libraries (mysqlxx_test PRIVATE mysqlxx)

add_executable (mysqlxx_pool_test mysqlxx_pool_test.cpp)
target_link_libraries (mysqlxx_pool_test PRIVATE mysqlxx)
@ -1,21 +0,0 @@
|
||||
<?xml version = '1.0' encoding = 'utf-8'?>
|
||||
<yandex>
|
||||
<mysql_goals>
|
||||
<port>3306</port>
|
||||
<user>root</user>
|
||||
<db>Metrica</db>
|
||||
<password>qwerty</password>
|
||||
<replica>
|
||||
<host>example02t</host>
|
||||
<priority>0</priority>
|
||||
</replica>
|
||||
<replica>
|
||||
<host>example02t</host>
|
||||
<port>3306</port>
|
||||
<user>root</user>
|
||||
<password>qwerty</password>
|
||||
<db>Metrica</db>
|
||||
<priority>1</priority>
|
||||
</replica>
|
||||
</mysql_goals>
|
||||
</yandex>
|
@ -1,77 +0,0 @@
|
||||
#include <iostream>
|
||||
#include <mysqlxx/mysqlxx.h>
|
||||
|
||||
|
||||
int main(int, char **)
|
||||
{
|
||||
try
|
||||
{
|
||||
mysqlxx::Connection connection("test", "127.0.0.1", "root", "qwerty", 3306);
|
||||
std::cerr << "Connected." << std::endl;
|
||||
|
||||
{
|
||||
mysqlxx::Query query = connection.query();
|
||||
query << "SELECT 1 x, '2010-01-01 01:01:01' d";
|
||||
mysqlxx::UseQueryResult result = query.use();
|
||||
std::cerr << "use() called." << std::endl;
|
||||
|
||||
while (mysqlxx::Row row = result.fetch())
|
||||
{
|
||||
std::cerr << "Fetched row." << std::endl;
|
||||
std::cerr << row[0] << ", " << row["x"] << std::endl;
|
||||
std::cerr << row[1] << ", " << row["d"]
|
||||
<< ", " << row[1].getDate()
|
||||
<< ", " << row[1].getDateTime()
|
||||
<< ", " << row[1].getDate()
|
||||
<< ", " << row[1].getDateTime()
|
||||
<< std::endl
|
||||
<< row[1].getDate() << ", " << row[1].getDateTime() << std::endl
|
||||
<< row[1].getDate() << ", " << row[1].getDateTime() << std::endl
|
||||
<< row[1].getDate() << ", " << row[1].getDateTime() << std::endl
|
||||
<< row[1].getDate() << ", " << row[1].getDateTime() << std::endl
|
||||
;
|
||||
|
||||
time_t t1 = row[0];
|
||||
time_t t2 = row[1];
|
||||
std::cerr << t1 << ", " << LocalDateTime(t1) << std::endl;
|
||||
std::cerr << t2 << ", " << LocalDateTime(t2) << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
mysqlxx::UseQueryResult result = connection.query("SELECT 'abc\\\\def' x").use();
|
||||
mysqlxx::Row row = result.fetch();
|
||||
std::cerr << row << std::endl;
|
||||
std::cerr << row << std::endl;
|
||||
}
|
||||
|
||||
{
|
||||
/// Копирование Query
|
||||
mysqlxx::Query query1 = connection.query("SELECT");
|
||||
mysqlxx::Query query2 = query1;
|
||||
query2 << " 1";
|
||||
|
||||
std::cerr << query1.str() << ", " << query2.str() << std::endl;
|
||||
}
|
||||
|
||||
{
|
||||
/// NULL
|
||||
mysqlxx::Null<int> x = mysqlxx::null;
|
||||
std::cerr << (x == mysqlxx::null ? "Ok" : "Fail") << std::endl;
|
||||
std::cerr << (x == 0 ? "Fail" : "Ok") << std::endl;
|
||||
std::cerr << (x.isNull() ? "Ok" : "Fail") << std::endl;
|
||||
x = 1;
|
||||
std::cerr << (x == mysqlxx::null ? "Fail" : "Ok") << std::endl;
|
||||
std::cerr << (x == 0 ? "Fail" : "Ok") << std::endl;
|
||||
std::cerr << (x == 1 ? "Ok" : "Fail") << std::endl;
|
||||
std::cerr << (x.isNull() ? "Fail" : "Ok") << std::endl;
|
||||
}
|
||||
}
|
||||
catch (const mysqlxx::Exception & e)
|
||||
{
|
||||
std::cerr << e.code() << ", " << e.message() << std::endl;
|
||||
throw;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
@@ -16,6 +16,10 @@ if (ENABLE_CLANG_TIDY)

    set (USE_CLANG_TIDY ON)

    # clang-tidy requires assertions to guide the analysis
    # Note that NDEBUG is set implicitly by CMake for non-debug builds
    set(COMPILER_FLAGS "${COMPILER_FLAGS} -UNDEBUG")

    # The variable CMAKE_CXX_CLANG_TIDY will be set inside src and base directories with non third-party code.
    # set (CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_PATH}")
elseif (FAIL_ON_UNSUPPORTED_OPTIONS_COMBINATION)
@@ -1,9 +1,9 @@
# These strings are autochanged from release_lib.sh:
SET(VERSION_REVISION 54449)
SET(VERSION_REVISION 54450)
SET(VERSION_MAJOR 21)
SET(VERSION_MINOR 4)
SET(VERSION_MINOR 5)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH af2135ef9dc72f16fa4f229b731262c3f0a8bbdc)
SET(VERSION_DESCRIBE v21.4.1.1-prestable)
SET(VERSION_STRING 21.4.1.1)
SET(VERSION_GITHASH 3827789b3d8fd2021952e57e5110343d26daa1a1)
SET(VERSION_DESCRIBE v21.5.1.1-prestable)
SET(VERSION_STRING 21.5.1.1)
# end of autochange
@@ -1,11 +1,11 @@
set (DEFAULT_LIBS "-nodefaultlibs")

if (NOT COMPILER_CLANG)
    message (FATAL_ERROR "Darwin build is supported only for Clang")
endif ()

set (DEFAULT_LIBS "${DEFAULT_LIBS} ${COVERAGE_OPTION} -lc -lm -lpthread -ldl")

if (COMPILER_GCC)
    set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc_eh")
endif ()

message(STATUS "Default libraries: ${DEFAULT_LIBS}")

set(CMAKE_CXX_STANDARD_LIBRARIES ${DEFAULT_LIBS})
@@ -1,3 +1,8 @@
if (OS_DARWIN AND COMPILER_GCC)
    # AMQP-CPP requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082
    set (ENABLE_AMQPCPP OFF CACHE INTERNAL "")
endif()

option(ENABLE_AMQPCPP "Enable AMQP-CPP" ${ENABLE_LIBRARIES})

if (NOT ENABLE_AMQPCPP)
@@ -1,4 +1,8 @@
option (ENABLE_BASE64 "Enable base64" ${ENABLE_LIBRARIES})
if(ARCH_AMD64 OR ARCH_ARM)
    option (ENABLE_BASE64 "Enable base64" ${ENABLE_LIBRARIES})
elseif(ENABLE_BASE64)
    message (${RECONFIGURE_MESSAGE_LEVEL} "base64 library is only supported on x86_64 and aarch64")
endif()

if (NOT ENABLE_BASE64)
    return()
@@ -1,3 +1,8 @@
if (OS_DARWIN AND COMPILER_GCC)
    # Cassandra requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082
    set (ENABLE_CASSANDRA OFF CACHE INTERNAL "")
endif()

option(ENABLE_CASSANDRA "Enable Cassandra" ${ENABLE_LIBRARIES})

if (NOT ENABLE_CASSANDRA)
@@ -32,7 +32,9 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE)
    if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
        message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")

        set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND})
        set (CMAKE_CXX_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_CXX_COMPILER_LAUNCHER})
        set (CMAKE_C_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_C_COMPILER_LAUNCHER})

        set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND})

        # debian (debhelpers) set SOURCE_DATE_EPOCH environment variable, that is
29 cmake/find/datasketches.cmake Normal file
@@ -0,0 +1,29 @@
option (ENABLE_DATASKETCHES "Enable DataSketches" ${ENABLE_LIBRARIES})

if (ENABLE_DATASKETCHES)

option (USE_INTERNAL_DATASKETCHES_LIBRARY "Set to FALSE to use system DataSketches library instead of bundled" ${NOT_UNBUNDLED})

if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/theta/CMakeLists.txt")
    if (USE_INTERNAL_DATASKETCHES_LIBRARY)
        message(WARNING "submodule contrib/datasketches-cpp is missing. to fix try run: \n git submodule update --init --recursive")
    endif()
    set(MISSING_INTERNAL_DATASKETCHES_LIBRARY 1)
    set(USE_INTERNAL_DATASKETCHES_LIBRARY 0)
endif()

if (USE_INTERNAL_DATASKETCHES_LIBRARY)
    set(DATASKETCHES_LIBRARY theta)
    set(DATASKETCHES_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/common/include" "${ClickHouse_SOURCE_DIR}/contrib/datasketches-cpp/theta/include")
elseif (NOT MISSING_INTERNAL_DATASKETCHES_LIBRARY)
    find_library(DATASKETCHES_LIBRARY theta)
    find_path(DATASKETCHES_INCLUDE_DIR NAMES theta_sketch.hpp PATHS ${DATASKETCHES_INCLUDE_PATHS})
endif()

if (DATASKETCHES_LIBRARY AND DATASKETCHES_INCLUDE_DIR)
    set(USE_DATASKETCHES 1)
endif()

endif()

message (STATUS "Using datasketches=${USE_DATASKETCHES}: ${DATASKETCHES_INCLUDE_DIR} : ${DATASKETCHES_LIBRARY}")
@@ -1,7 +1,7 @@
if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT OS_DARWIN)
if(ARCH_AMD64 AND NOT OS_FREEBSD AND NOT OS_DARWIN)
    option(ENABLE_FASTOPS "Enable fast vectorized mathematical functions library by Mikhail Parakhin" ${ENABLE_LIBRARIES})
elseif(ENABLE_FASTOPS)
    message (${RECONFIGURE_MESSAGE_LEVEL} "Fastops library is not supported on ARM, FreeBSD and Darwin")
    message (${RECONFIGURE_MESSAGE_LEVEL} "Fastops library is supported on x86_64 only, and not FreeBSD or Darwin")
endif()

if(NOT ENABLE_FASTOPS)
@@ -1,4 +1,4 @@
if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT APPLE AND USE_PROTOBUF)
if(NOT ARCH_ARM AND NOT OS_FREEBSD AND NOT APPLE AND USE_PROTOBUF AND NOT ARCH_PPC64LE)
    option(ENABLE_HDFS "Enable HDFS" ${ENABLE_LIBRARIES})
elseif(ENABLE_HDFS OR USE_INTERNAL_HDFS3_LIBRARY)
    message (${RECONFIGURE_MESSAGE_LEVEL} "Cannot use HDFS3 with current configuration")
@@ -62,6 +62,7 @@ if (NOT OPENLDAP_FOUND AND NOT MISSING_INTERNAL_LDAP_LIBRARY)
    if (
        ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "x86_64" ) OR
        ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "aarch64" ) OR
        ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "ppc64le" ) OR
        ( "${_system_name}" STREQUAL "freebsd" AND "${_system_processor}" STREQUAL "x86_64" ) OR
        ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" )
    )
35 cmake/find/nanodbc.cmake Normal file
@@ -0,0 +1,35 @@
option(ENABLE_NANODBC "Enable nanodbc" ${ENABLE_LIBRARIES})

if (NOT ENABLE_NANODBC)
    set (USE_ODBC 0)
    return()
endif()

if (NOT ENABLE_ODBC)
    set (USE_NANODBC 0)
    message (STATUS "Using nanodbc=${USE_NANODBC}")
    return()
endif()

if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/CMakeLists.txt")
    message (WARNING "submodule contrib/nanodbc is missing. to fix try run: \n git submodule update --init --recursive")
    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal nanodbc library")
    set (USE_NANODBC 0)
    return()
endif()

if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/unixodbc/include")
    message (ERROR "submodule contrib/unixodbc is missing. to fix try run: \n git submodule update --init --recursive")
    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal unixodbc needed for nanodbc")
    set (USE_NANODBC 0)
    return()
endif()

set (USE_NANODBC 1)

set (NANODBC_LIBRARY nanodbc)

set (NANODBC_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/nanodbc")

message (STATUS "Using nanodbc=${USE_NANODBC}: ${NANODBC_INCLUDE_DIR} : ${NANODBC_LIBRARY}")
message (STATUS "Using unixodbc")
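For orientation (not part of the diff): nanodbc is a thin C++ wrapper over the unixodbc C API required above. A minimal sketch, assuming an ODBC driver and a DSN named "clickhouse" are configured (both hypothetical):

#include <iostream>
#include <nanodbc/nanodbc.h>

int main()
{
    nanodbc::connection conn(NANODBC_TEXT("DSN=clickhouse"));               /// hypothetical DSN
    nanodbc::result row = nanodbc::execute(conn, NANODBC_TEXT("SELECT 1"));
    while (row.next())
        std::cout << row.get<int>(0) << std::endl;
    return 0;
}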
@@ -11,7 +11,7 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/NuRaft/CMakeLists.txt")
    return()
endif ()

if (NOT OS_FREEBSD AND NOT OS_DARWIN)
if (NOT OS_FREEBSD)
    set (USE_NURAFT 1)
    set (NURAFT_LIBRARY nuraft)
@@ -1,7 +1,7 @@
if(NOT OS_FREEBSD AND NOT APPLE AND NOT ARCH_ARM)
if(NOT OS_FREEBSD AND NOT APPLE)
    option(ENABLE_S3 "Enable S3" ${ENABLE_LIBRARIES})
elseif(ENABLE_S3 OR USE_INTERNAL_AWS_S3_LIBRARY)
    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use S3 on ARM, Apple or FreeBSD")
    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't use S3 on Apple or FreeBSD")
endif()

if(NOT ENABLE_S3)
27 cmake/find/xz.cmake Normal file
@@ -0,0 +1,27 @@
option (USE_INTERNAL_XZ_LIBRARY "Set to OFF to use system xz (lzma) library instead of bundled" ${NOT_UNBUNDLED})

if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api/lzma.h")
    if(USE_INTERNAL_XZ_LIBRARY)
        message(WARNING "submodule contrib/xz is missing. to fix try run: \n git submodule update --init --recursive")
        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal xz (lzma) library")
        set(USE_INTERNAL_XZ_LIBRARY 0)
    endif()
    set(MISSING_INTERNAL_XZ_LIBRARY 1)
endif()

if (NOT USE_INTERNAL_XZ_LIBRARY)
    find_library (XZ_LIBRARY lzma)
    find_path (XZ_INCLUDE_DIR NAMES lzma.h PATHS ${XZ_INCLUDE_PATHS})
    if (NOT XZ_LIBRARY OR NOT XZ_INCLUDE_DIR)
        message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system xz (lzma) library")
    endif ()
endif ()

if (XZ_LIBRARY AND XZ_INCLUDE_DIR)
elseif (NOT MISSING_INTERNAL_XZ_LIBRARY)
    set (USE_INTERNAL_XZ_LIBRARY 1)
    set (XZ_LIBRARY liblzma)
    set (XZ_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api)
endif ()

message (STATUS "Using xz (lzma): ${XZ_INCLUDE_DIR} : ${XZ_LIBRARY}")
@@ -6,7 +6,7 @@ set (DEFAULT_LIBS "-nodefaultlibs")
# We need builtins from Clang's RT even without libcxx - for ubsan+int128.
# See https://bugs.llvm.org/show_bug.cgi?id=16404
if (COMPILER_CLANG AND NOT (CMAKE_CROSSCOMPILING AND ARCH_AARCH64))
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-file-name=libclang_rt.builtins-${CMAKE_SYSTEM_PROCESSOR}.a OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
    execute_process (COMMAND ${CMAKE_CXX_COMPILER} --print-libgcc-file-name --rtlib=compiler-rt OUTPUT_VARIABLE BUILTINS_LIBRARY OUTPUT_STRIP_TRAILING_WHITESPACE)
else ()
    set (BUILTINS_LIBRARY "-lgcc")
endif ()
@@ -86,8 +86,3 @@ if (LINKER_NAME)
    message(STATUS "Using custom linker by name: ${LINKER_NAME}")
endif ()

if (ARCH_PPC64LE)
    if (COMPILER_CLANG OR (COMPILER_GCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 8))
        message(FATAL_ERROR "Only gcc-8 or higher is supported for powerpc architecture")
    endif ()
endif ()
@@ -11,11 +11,6 @@ if (NOT MSVC)
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wextra")
endif ()

if (USE_DEBUG_HELPERS)
    set (INCLUDE_DEBUG_HELPERS "-I${ClickHouse_SOURCE_DIR}/base -include ${ClickHouse_SOURCE_DIR}/src/Core/iostream_debug_helpers.h")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${INCLUDE_DEBUG_HELPERS}")
endif ()

# Add some warnings that are not available even with -Wall -Wextra -Wpedantic.
# Intended for exploration of new compiler warnings that may be found useful.
# Applies to clang only
34 contrib/CMakeLists.txt vendored
@@ -47,7 +47,10 @@ add_subdirectory (lz4-cmake)
add_subdirectory (murmurhash)
add_subdirectory (replxx-cmake)
add_subdirectory (unixodbc-cmake)
add_subdirectory (xz)

if (USE_INTERNAL_XZ_LIBRARY)
    add_subdirectory (xz)
endif()

add_subdirectory (poco-cmake)
add_subdirectory (croaring-cmake)
@@ -215,15 +218,17 @@ if (USE_EMBEDDED_COMPILER AND USE_INTERNAL_LLVM_LIBRARY)
    set (LLVM_ENABLE_RTTI 1 CACHE INTERNAL "")
    set (LLVM_ENABLE_PIC 0 CACHE INTERNAL "")
    set (LLVM_TARGETS_TO_BUILD "X86;AArch64" CACHE STRING "")
    # Yes it is set globally, but this is not enough, since llvm will add -std=c++11 after default
    # And c++2a cannot be used, due to ambiguous operator !=
    if (COMPILER_GCC OR COMPILER_CLANG)
        set (_CXX_STANDARD "gnu++17")
    else()
        set (_CXX_STANDARD "c++17")
    endif()
    set (LLVM_CXX_STD ${_CXX_STANDARD} CACHE STRING "" FORCE)

    # Need to use C++17 since the compilation is not possible with C++20 currently, due to ambiguous operator != etc.
    # LLVM project will set its default value for the -std=... but our global setting from CMake will override it.
    set (CMAKE_CXX_STANDARD_bak ${CMAKE_CXX_STANDARD})
    set (CMAKE_CXX_STANDARD 17)

    add_subdirectory (llvm/llvm)

    set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak})
    unset (CMAKE_CXX_STANDARD_bak)

    target_include_directories(LLVMSupport SYSTEM BEFORE PRIVATE ${ZLIB_INCLUDE_DIR})
endif ()

@@ -280,7 +285,14 @@ if (USE_AMQPCPP)
    add_subdirectory (amqpcpp-cmake)
endif()
if (USE_CASSANDRA)
    # Need to use C++17 since the compilation is not possible with C++20 currently.
    set (CMAKE_CXX_STANDARD_bak ${CMAKE_CXX_STANDARD})
    set (CMAKE_CXX_STANDARD 17)

    add_subdirectory (cassandra)

    set (CMAKE_CXX_STANDARD ${CMAKE_CXX_STANDARD_bak})
    unset (CMAKE_CXX_STANDARD_bak)
endif()

# Should go before:
@@ -314,6 +326,10 @@ if (USE_LIBPQXX)
    add_subdirectory (libpqxx-cmake)
endif()

if (USE_NANODBC)
    add_subdirectory (nanodbc-cmake)
endif()

if (USE_NURAFT)
    add_subdirectory(nuraft-cmake)
endif()
2 contrib/NuRaft vendored
@@ -1 +1 @@
Subproject commit 3d3683e77753cfe015a05fae95ddf418e19f59e1
Subproject commit d2feb5978b979729a07c3ca76eaa4ab94cef4ceb
2 contrib/antlr4-runtime vendored
@@ -1 +1 @@
Subproject commit a2fa7b76e2ee16d2ad955e9214a90bbf79da66fc
Subproject commit 672643e9a427ef803abf13bc8cb4989606553d64
2 contrib/arrow vendored
@@ -1 +1 @@
Subproject commit 744bdfe188f018e5e05f5deebd4e9ee0a7706cf4
Subproject commit 616b3dc76a0c8450b4027ded8a78e9619d7c845f
@@ -160,6 +160,12 @@ if (NOT EXTERNAL_BOOST_FOUND)
    enable_language(ASM)
    SET(ASM_OPTIONS "-x assembler-with-cpp")

    set (SRCS_CONTEXT
        ${LIBRARY_DIR}/libs/context/src/dummy.cpp
        ${LIBRARY_DIR}/libs/context/src/execution_context.cpp
        ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp
    )

    if (SANITIZE AND (SANITIZE STREQUAL "address" OR SANITIZE STREQUAL "thread"))
        add_compile_definitions(BOOST_USE_UCONTEXT)

@@ -169,39 +175,34 @@ if (NOT EXTERNAL_BOOST_FOUND)
        add_compile_definitions(BOOST_USE_TSAN)
    endif()

        set (SRCS_CONTEXT
        set (SRCS_CONTEXT ${SRCS_CONTEXT}
            ${LIBRARY_DIR}/libs/context/src/fiber.cpp
            ${LIBRARY_DIR}/libs/context/src/continuation.cpp
            ${LIBRARY_DIR}/libs/context/src/dummy.cpp
            ${LIBRARY_DIR}/libs/context/src/execution_context.cpp
            ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp
        )
    elseif (ARCH_ARM)
        set (SRCS_CONTEXT
    endif()
    if (ARCH_ARM)
        set (SRCS_CONTEXT ${SRCS_CONTEXT}
            ${LIBRARY_DIR}/libs/context/src/asm/jump_arm64_aapcs_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/make_arm64_aapcs_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/ontop_arm64_aapcs_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/dummy.cpp
            ${LIBRARY_DIR}/libs/context/src/execution_context.cpp
            ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp
        )
    elseif (ARCH_PPC64LE)
        set (SRCS_CONTEXT ${SRCS_CONTEXT}
            ${LIBRARY_DIR}/libs/context/src/asm/jump_ppc64_sysv_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/make_ppc64_sysv_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/ontop_ppc64_sysv_elf_gas.S
        )
    elseif(OS_DARWIN)
        set (SRCS_CONTEXT
        set (SRCS_CONTEXT ${SRCS_CONTEXT}
            ${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_macho_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_macho_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_macho_gas.S
            ${LIBRARY_DIR}/libs/context/src/dummy.cpp
            ${LIBRARY_DIR}/libs/context/src/execution_context.cpp
            ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp
        )
    else()
        set (SRCS_CONTEXT
        set (SRCS_CONTEXT ${SRCS_CONTEXT}
            ${LIBRARY_DIR}/libs/context/src/asm/jump_x86_64_sysv_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/make_x86_64_sysv_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/asm/ontop_x86_64_sysv_elf_gas.S
            ${LIBRARY_DIR}/libs/context/src/dummy.cpp
            ${LIBRARY_DIR}/libs/context/src/execution_context.cpp
            ${LIBRARY_DIR}/libs/context/src/posix/stack_traits.cpp
        )
    endif()
2 contrib/boringssl vendored
@@ -1 +1 @@
Subproject commit fd9ce1a0406f571507068b9555d0b545b8a18332
Subproject commit 83c1cda8a0224dc817cbad2966c7ed4acc35f02a
@@ -16,7 +16,7 @@ endif()

if(CMAKE_COMPILER_IS_GNUCXX OR CLANG)
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fvisibility=hidden -fno-common -fno-exceptions -fno-rtti")
    if(APPLE)
    if(APPLE AND CLANG)
        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
    endif()
@@ -97,12 +97,19 @@ if (NOT EXTERNAL_CCTZ_LIBRARY_FOUND OR NOT EXTERNAL_CCTZ_LIBRARY_WORKS)
        set(TZ_OBJS ${TZ_OBJS} ${TZ_OBJ})

        # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake
        add_custom_command(OUTPUT ${TZ_OBJ}
            COMMAND cp ${TZDIR}/${TIMEZONE} ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}
            COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS}
        # PPC64LE fails to do this with objcopy, use ld or lld instead
        if (ARCH_PPC64LE)
            add_custom_command(OUTPUT ${TZ_OBJ}
                COMMAND cp ${TZDIR}/${TIMEZONE} ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}
                COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o ${TZ_OBJ} ${TIMEZONE_ID}
                COMMAND rm ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID})
        else()
            add_custom_command(OUTPUT ${TZ_OBJ}
                COMMAND cp ${TZDIR}/${TIMEZONE} ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID}
                COMMAND cd ${CMAKE_CURRENT_BINARY_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS}
                    --rename-section .data=.rodata,alloc,load,readonly,data,contents ${TIMEZONE_ID} ${TZ_OBJ}
                COMMAND rm ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID})

            COMMAND rm ${CMAKE_CURRENT_BINARY_DIR}/${TIMEZONE_ID})
        endif()
        set_source_files_properties(${TZ_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true)
    endforeach(TIMEZONE)
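The custom command above (the trick from the linked Stack Overflow answer) converts each timezone file into an object file whose contents are exposed through linker-synthesized symbols. A minimal sketch of how such an embedded blob is then read from C++ (the _binary_<id>_start/_end names follow the objcopy/ld convention; the exact <id> is derived from the file name, so the one below is hypothetical):

#include <cstddef>
#include <string_view>

extern "C" const char _binary_Europe_Moscow_start[];  /// hypothetical symbol id
extern "C" const char _binary_Europe_Moscow_end[];

std::string_view embeddedTimezone()
{
    return {_binary_Europe_Moscow_start,
            static_cast<size_t>(_binary_Europe_Moscow_end - _binary_Europe_Moscow_start)};
}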
1 contrib/datasketches-cpp vendored Submodule
@@ -0,0 +1 @@
Subproject commit f915d35b2de676683493c86c585141a1e1c83334
2 contrib/flatbuffers vendored
@@ -1 +1 @@
Subproject commit 6df40a2471737b27271bdd9b900ab5f3aec746c7
Subproject commit 22e3ffc66d2d7d72d1414390aa0f04ffd114a5a1
2 contrib/grpc vendored
@@ -1 +1 @@
Subproject commit 7436366ceb341ba5c00ea29f1645e02a2b70bf93
Subproject commit 8d558f03fe370240081424fafa76cdc9301ea14b
@@ -1,7 +1,7 @@
if (SANITIZE OR NOT (ARCH_AMD64 OR ARCH_ARM) OR NOT (OS_LINUX OR OS_FREEBSD OR OS_DARWIN))
if (SANITIZE OR NOT (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE) OR NOT (OS_LINUX OR OS_FREEBSD OR OS_DARWIN))
    if (ENABLE_JEMALLOC)
        message (${RECONFIGURE_MESSAGE_LEVEL}
                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64 or aarch64 on linux or freebsd.")
                 "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64 or ppc64le on linux or freebsd.")
    endif()
    set (ENABLE_JEMALLOC OFF)
else()
@@ -107,6 +107,8 @@ if (ARCH_AMD64)
    set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_x86_64")
elseif (ARCH_ARM)
    set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_aarch64")
elseif (ARCH_PPC64LE)
    set(JEMALLOC_INCLUDE_PREFIX "${JEMALLOC_INCLUDE_PREFIX}_ppc64le")
else ()
    message (FATAL_ERROR "internal jemalloc: This arch is not supported")
endif ()
@@ -119,12 +121,14 @@ target_include_directories(jemalloc SYSTEM PRIVATE
target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_NO_PRIVATE_NAMESPACE)

if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG")
    target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1 -DJEMALLOC_PROF=1)
    target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1)
endif ()

    if (USE_UNWIND)
        target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1)
        target_link_libraries (jemalloc PRIVATE unwind)
    endif ()
target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_PROF=1)

if (USE_UNWIND)
    target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1)
    target_link_libraries (jemalloc PRIVATE unwind)
endif ()

target_compile_options(jemalloc PRIVATE -Wno-redundant-decls)
@ -0,0 +1,367 @@
|
||||
/* include/jemalloc/internal/jemalloc_internal_defs.h. Generated from jemalloc_internal_defs.h.in by configure. */
|
||||
#ifndef JEMALLOC_INTERNAL_DEFS_H_
|
||||
#define JEMALLOC_INTERNAL_DEFS_H_
|
||||
/*
|
||||
* If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all
|
||||
* public APIs to be prefixed. This makes it possible, with some care, to use
|
||||
* multiple allocators simultaneously.
|
||||
*/
|
||||
/* #undef JEMALLOC_PREFIX */
|
||||
/* #undef JEMALLOC_CPREFIX */
|
||||
|
||||
/*
|
||||
* Define overrides for non-standard allocator-related functions if they are
|
||||
* present on the system.
|
||||
*/
|
||||
#define JEMALLOC_OVERRIDE___LIBC_CALLOC
|
||||
#define JEMALLOC_OVERRIDE___LIBC_FREE
|
||||
#define JEMALLOC_OVERRIDE___LIBC_MALLOC
|
||||
#define JEMALLOC_OVERRIDE___LIBC_MEMALIGN
|
||||
#define JEMALLOC_OVERRIDE___LIBC_REALLOC
|
||||
#define JEMALLOC_OVERRIDE___LIBC_VALLOC
|
||||
/* #undef JEMALLOC_OVERRIDE___POSIX_MEMALIGN */
|
||||
|
||||
/*
|
||||
* JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs.
|
||||
* For shared libraries, symbol visibility mechanisms prevent these symbols
|
||||
* from being exported, but for static libraries, naming collisions are a real
|
||||
* possibility.
|
||||
*/
|
||||
#define JEMALLOC_PRIVATE_NAMESPACE je_
|
||||
|
||||
/*
|
||||
* Hyper-threaded CPUs may need a special instruction inside spin loops in
|
||||
* order to yield to another virtual CPU.
|
||||
*/
|
||||
#define CPU_SPINWAIT
|
||||
/* 1 if CPU_SPINWAIT is defined, 0 otherwise. */
|
||||
#define HAVE_CPU_SPINWAIT 0
|
||||
|
||||
/*
|
||||
* Number of significant bits in virtual addresses. This may be less than the
|
||||
* total number of bits in a pointer, e.g. on x64, for which the uppermost 16
|
||||
* bits are the same as bit 47.
|
||||
*/
|
||||
#define LG_VADDR 64
|
||||
|
||||
/* Defined if C11 atomics are available. */
|
||||
#define JEMALLOC_C11_ATOMICS 1
|
||||
|
||||
/* Defined if GCC __atomic atomics are available. */
|
||||
#define JEMALLOC_GCC_ATOMIC_ATOMICS 1
|
||||
/* and the 8-bit variant support. */
|
||||
#define JEMALLOC_GCC_U8_ATOMIC_ATOMICS 1
|
||||
|
||||
/* Defined if GCC __sync atomics are available. */
|
||||
#define JEMALLOC_GCC_SYNC_ATOMICS 1
|
||||
/* and the 8-bit variant support. */
|
||||
#define JEMALLOC_GCC_U8_SYNC_ATOMICS 1
|
||||
|
||||
/*
|
||||
* Defined if __builtin_clz() and __builtin_clzl() are available.
|
||||
*/
|
||||
#define JEMALLOC_HAVE_BUILTIN_CLZ
|
||||
|
||||
/*
|
||||
* Defined if os_unfair_lock_*() functions are available, as provided by Darwin.
|
||||
*/
|
||||
/* #undef JEMALLOC_OS_UNFAIR_LOCK */
|
||||
|
||||
/* Defined if syscall(2) is usable. */
|
||||
#define JEMALLOC_USE_SYSCALL
|
||||
|
||||
/*
|
||||
* Defined if secure_getenv(3) is available.
|
||||
*/
|
||||
// #define JEMALLOC_HAVE_SECURE_GETENV
|
||||
|
||||
/*
|
||||
* Defined if issetugid(2) is available.
|
||||
*/
|
||||
/* #undef JEMALLOC_HAVE_ISSETUGID */
|
||||
|
||||
/* Defined if pthread_atfork(3) is available. */
|
||||
#define JEMALLOC_HAVE_PTHREAD_ATFORK
|
||||
|
||||
/* Defined if pthread_setname_np(3) is available. */
|
||||
#define JEMALLOC_HAVE_PTHREAD_SETNAME_NP
|
||||
|
||||
/*
|
||||
* Defined if clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is available.
|
||||
*/
|
||||
#define JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE 1
|
||||
|
||||
/*
|
||||
* Defined if clock_gettime(CLOCK_MONOTONIC, ...) is available.
|
||||
*/
|
||||
#define JEMALLOC_HAVE_CLOCK_MONOTONIC 1
|
||||
|
||||
/*
 * Defined if mach_absolute_time() is available.
 */
/* #undef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME */

/*
 * Defined if _malloc_thread_cleanup() exists.  At least in the case of
 * FreeBSD, pthread_key_create() allocates, which if used during malloc
 * bootstrapping will cause recursion into the pthreads library.  Therefore, if
 * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in
 * malloc_tsd.
 */
/* #undef JEMALLOC_MALLOC_THREAD_CLEANUP */

/*
 * Defined if threaded initialization is known to be safe on this platform.
 * Among other things, it must be possible to initialize a mutex without
 * triggering allocation in order for threaded allocation to be safe.
 */
#define JEMALLOC_THREADED_INIT

/*
 * Defined if the pthreads implementation defines
 * _pthread_mutex_init_calloc_cb(), in which case the function is used in order
 * to avoid recursive allocation during mutex initialization.
 */
/* #undef JEMALLOC_MUTEX_INIT_CB */

/* Non-empty if the tls_model attribute is supported. */
#define JEMALLOC_TLS_MODEL __attribute__((tls_model("initial-exec")))

/*
 * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables
 * inline functions.
 */
/* #undef JEMALLOC_DEBUG */

/* JEMALLOC_STATS enables statistics calculation. */
#define JEMALLOC_STATS

/* JEMALLOC_EXPERIMENTAL_SMALLOCX_API enables experimental smallocx API. */
/* #undef JEMALLOC_EXPERIMENTAL_SMALLOCX_API */

/* JEMALLOC_PROF enables allocation profiling. */
/* #undef JEMALLOC_PROF */

/* Use libunwind for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBUNWIND */

/* Use libgcc for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_LIBGCC */

/* Use gcc intrinsics for profile backtracing if defined. */
/* #undef JEMALLOC_PROF_GCC */

/*
 * JEMALLOC_DSS enables use of sbrk(2) to allocate extents from the data storage
 * segment (DSS).
 */
#define JEMALLOC_DSS

/* Support memory filling (junk/zero). */
#define JEMALLOC_FILL

/* Support utrace(2)-based tracing. */
/* #undef JEMALLOC_UTRACE */

/* Support optional abort() on OOM. */
/* #undef JEMALLOC_XMALLOC */

/* Support lazy locking (avoid locking unless a second thread is launched). */
/* #undef JEMALLOC_LAZY_LOCK */

/*
 * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size
 * classes).
 */
/* #undef LG_QUANTUM */

/* One page is 2^LG_PAGE bytes. */
#define LG_PAGE 16

/*
 * One huge page is 2^LG_HUGEPAGE bytes.  Note that this is defined even if the
 * system does not explicitly support huge pages; system calls that require
 * explicit huge page support are separately configured.
 */
#define LG_HUGEPAGE 21

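/*
 * Worked out: LG_PAGE = 16 means pages of 2^16 = 65536 bytes (64 KiB, typical
 * of ppc64le kernels), and LG_HUGEPAGE = 21 means 2^21 = 2 MiB huge pages.
 * Derived-size macros along these lines (names assumed for illustration):
 */
#if 0 /* illustrative only */
#define PAGE     ((size_t)1 << LG_PAGE)     /* 65536 bytes   */
#define HUGEPAGE ((size_t)1 << LG_HUGEPAGE) /* 2097152 bytes */
#endif
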
/*
 * If defined, adjacent virtual memory mappings with identical attributes
 * automatically coalesce, and they fragment when changes are made to subranges.
 * This is the normal order of things for mmap()/munmap(), but on Windows
 * VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e.
 * mappings do *not* coalesce/fragment.
 */
#define JEMALLOC_MAPS_COALESCE

/*
 * If defined, retain memory for later reuse by default rather than using e.g.
 * munmap() to unmap freed extents.  This is enabled on 64-bit Linux because
 * common sequences of mmap()/munmap() calls will cause virtual memory map
 * holes.
 */
#define JEMALLOC_RETAIN

/* TLS is used to map arenas and magazine caches to threads. */
#define JEMALLOC_TLS

/*
 * Used to mark unreachable code to quiet "end of non-void" compiler warnings.
 * Don't use this directly; instead use unreachable() from util.h
 */
#define JEMALLOC_INTERNAL_UNREACHABLE __builtin_unreachable

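/*
 * A sketch of the usual __builtin_unreachable() pattern: it silences
 * "control reaches end of non-void function" warnings after an exhaustive set
 * of returns.
 */
#if 0 /* illustrative only */
static int sign(int x) {
    if (x < 0) return -1;
    if (x == 0) return 0;
    if (x > 0) return 1;
    __builtin_unreachable(); /* every case returned above */
}
#endif
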
/*
 * ffs*() functions to use for bitmapping.  Don't use these directly; instead,
 * use ffs_*() from util.h.
 */
#define JEMALLOC_INTERNAL_FFSLL __builtin_ffsll
#define JEMALLOC_INTERNAL_FFSL __builtin_ffsl
#define JEMALLOC_INTERNAL_FFS __builtin_ffs

/*
 * popcount*() functions to use for bitmapping.
 */
#define JEMALLOC_INTERNAL_POPCOUNTL __builtin_popcountl
#define JEMALLOC_INTERNAL_POPCOUNT __builtin_popcount

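/*
 * Worked examples for the builtins chosen above: __builtin_ffsl(0x18) == 4
 * (lowest set bit of 0b11000, 1-based; 0 means no bit set) and
 * __builtin_popcountl(0x18) == 2 (number of set bits) -- the two primitives
 * that bitmap scans are built from.
 */
#if 0 /* illustrative only */
int first = __builtin_ffsl(0x18L);      /* == 4 */
int bits  = __builtin_popcountl(0x18L); /* == 2 */
#endif
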
/*
 * If defined, explicitly attempt to more uniformly distribute large allocation
 * pointer alignments across all cache indices.
 */
#define JEMALLOC_CACHE_OBLIVIOUS

/*
 * If defined, enable logging facilities.  We make this a configure option to
 * avoid taking extra branches everywhere.
 */
/* #undef JEMALLOC_LOG */

/*
 * If defined, use readlinkat() (instead of readlink()) to follow
 * /etc/malloc_conf.
 */
/* #undef JEMALLOC_READLINKAT */

/*
 * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings.
 */
/* #undef JEMALLOC_ZONE */

/*
 * Methods for determining whether the OS overcommits.
 * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's
 *                                         /proc/sys/vm.overcommit_memory file.
 * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl.
 */
/* #undef JEMALLOC_SYSCTL_VM_OVERCOMMIT */
#define JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY

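/*
 * A sketch of the Linux method selected above, assuming procfs is mounted:
 * read /proc/sys/vm/overcommit_memory; modes 0 (heuristic) and 1 (always)
 * mean the kernel overcommits, mode 2 means strict accounting.
 */
#if 0 /* illustrative only */
#include <fcntl.h>
#include <unistd.h>
static int os_overcommits_proc(void) {
    char mode;
    int fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY);
    if (fd < 0)
        return 1; /* unknown: assume overcommit */
    ssize_t n = read(fd, &mode, 1);
    close(fd);
    return n == 1 && (mode == '0' || mode == '1');
}
#endif
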
/* Defined if madvise(2) is available. */
#define JEMALLOC_HAVE_MADVISE

/*
 * Defined if transparent huge pages are supported via the MADV_[NO]HUGEPAGE
 * arguments to madvise(2).
 */
#define JEMALLOC_HAVE_MADVISE_HUGE

/*
 * Methods for purging unused pages differ between operating systems.
 *
 *   madvise(..., MADV_FREE) : This marks pages as being unused, such that they
 *                             will be discarded rather than swapped out.
 *   madvise(..., MADV_DONTNEED) : If JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS is
 *                                 defined, this immediately discards pages,
 *                                 such that new pages will be demand-zeroed if
 *                                 the address region is later touched;
 *                                 otherwise this behaves similarly to
 *                                 MADV_FREE, though typically with higher
 *                                 system overhead.
 */
#define JEMALLOC_PURGE_MADVISE_FREE
#define JEMALLOC_PURGE_MADVISE_DONTNEED
#define JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS

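/*
 * A sketch of the two purge calls described above, on a page-aligned range:
 * MADV_FREE is lazy (pages survive until memory pressure), MADV_DONTNEED
 * drops them immediately so later touches are demand-zeroed.
 */
#if 0 /* illustrative only */
#include <sys/mman.h>
madvise(addr, length, MADV_FREE);     /* lazy purge                    */
madvise(addr, length, MADV_DONTNEED); /* immediate purge, zeroed reuse */
#endif
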
/* Defined if madvise(2) is available but MADV_FREE is not (x86 Linux only). */
/* #undef JEMALLOC_DEFINE_MADVISE_FREE */

/*
 * Defined if MADV_DO[NT]DUMP is supported as an argument to madvise.
 */
#define JEMALLOC_MADVISE_DONTDUMP

/*
 * Defined if transparent huge pages (THPs) are supported via the
 * MADV_[NO]HUGEPAGE arguments to madvise(2), and THP support is enabled.
 */
/* #undef JEMALLOC_THP */

/* Define if operating system has alloca.h header. */
#define JEMALLOC_HAS_ALLOCA_H 1

/* C99 restrict keyword supported. */
#define JEMALLOC_HAS_RESTRICT 1

/* For use by hash code. */
/* #undef JEMALLOC_BIG_ENDIAN */

/* sizeof(int) == 2^LG_SIZEOF_INT. */
#define LG_SIZEOF_INT 2

/* sizeof(long) == 2^LG_SIZEOF_LONG. */
#define LG_SIZEOF_LONG 3

/* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */
#define LG_SIZEOF_LONG_LONG 3

/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */
#define LG_SIZEOF_INTMAX_T 3

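/*
 * Worked out: LG_SIZEOF_INT = 2 encodes sizeof(int) = 2^2 = 4 bytes, and
 * LG_SIZEOF_LONG = 3 encodes sizeof(long) = 2^3 = 8 bytes, i.e. an LP64
 * target.  The encoding can be checked at compile time:
 */
#if 0 /* illustrative only */
#include <assert.h>
static_assert(sizeof(int)  == (1 << LG_SIZEOF_INT),  "LG_SIZEOF_INT");
static_assert(sizeof(long) == (1 << LG_SIZEOF_LONG), "LG_SIZEOF_LONG");
#endif
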
/* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). */
#define JEMALLOC_GLIBC_MALLOC_HOOK

/* glibc memalign hook. */
#define JEMALLOC_GLIBC_MEMALIGN_HOOK

/* pthread support */
#define JEMALLOC_HAVE_PTHREAD

/* dlsym() support */
#define JEMALLOC_HAVE_DLSYM

/* Adaptive mutex support in pthreads. */
#define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP

/* GNU specific sched_getcpu support */
#define JEMALLOC_HAVE_SCHED_GETCPU

/* GNU specific sched_setaffinity support */
#define JEMALLOC_HAVE_SCHED_SETAFFINITY

/*
 * If defined, all the features necessary for background threads are present.
 */
#define JEMALLOC_BACKGROUND_THREAD 1

/*
 * If defined, jemalloc symbols are not exported (doesn't work when
 * JEMALLOC_PREFIX is not defined).
 */
/* #undef JEMALLOC_EXPORT */

/* config.malloc_conf options string. */
#define JEMALLOC_CONFIG_MALLOC_CONF "@JEMALLOC_CONFIG_MALLOC_CONF@"

/* If defined, jemalloc takes the malloc/free/etc. symbol names. */
#define JEMALLOC_IS_MALLOC 1

/*
 * Defined if strerror_r returns char * if _GNU_SOURCE is defined.
 */
#define JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE

/* Performs additional safety checks when defined. */
/* #undef JEMALLOC_OPT_SAFETY_CHECKS */

#endif /* JEMALLOC_INTERNAL_DEFS_H_ */
@ -1,11 +1,9 @@
if (NOT ARCH_ARM)
if(ARCH_AMD64)
option (ENABLE_CPUID "Enable libcpuid library (only internal)" ${ENABLE_LIBRARIES})
endif()

if (ARCH_ARM AND ENABLE_CPUID)
message (${RECONFIGURE_MESSAGE_LEVEL} "cpuid is not supported on ARM")
elseif(ENABLE_CPUID)
message (${RECONFIGURE_MESSAGE_LEVEL} "libcpuid is only supported on x86_64")
set (ENABLE_CPUID 0)
endif ()
endif()

if (NOT ENABLE_CPUID)
add_library (cpuid INTERFACE)
2
contrib/libcxx
vendored
@ -1 +1 @@
Subproject commit 8b80a151d12b98ffe2d0c22f7cec12c3b9ff88d7
Subproject commit 2fa892f69acbaa40f8a18c6484854a6183a34482
@ -56,6 +56,11 @@ if (USE_UNWIND)
target_compile_definitions(cxx PUBLIC -DSTD_EXCEPTION_HAS_STACK_TRACE=1)
endif ()

# Override the deduced attribute support that causes error.
if (OS_DARWIN AND COMPILER_GCC)
add_compile_definitions(_LIBCPP_INIT_PRIORITY_MAX)
endif ()

target_compile_options(cxx PUBLIC $<$<COMPILE_LANGUAGE:CXX>:-nostdinc++>)

# Third party library may have substandard code.

File diff suppressed because it is too large
2
contrib/libpq
vendored
@ -1 +1 @@
Subproject commit 1f9c286dba60809edb64e384d6727d80d269b6cf
Subproject commit c7624588ddd84f153dd5990e81b886e4568bddde
@ -75,6 +75,8 @@
#define HAVE_STRNDUP 1
// strerror_r
#define HAVE_STRERROR_R 1
// rand_r
#define HAVE_RAND_R 1

#ifdef __APPLE__
// pthread_setname_np
2
contrib/mariadb-connector-c
vendored
@ -1 +1 @@
Subproject commit f4476ee7311b35b593750f6ae2cbdb62a4006374
Subproject commit 5f4034a3a6376416504f17186c55fe401c6d8e5e
1
contrib/nanodbc
vendored
Submodule
@ -0,0 +1 @@
Subproject commit 9fc459675515d491401727ec67fca38db721f28c
14
contrib/nanodbc-cmake/CMakeLists.txt
Normal file
@ -0,0 +1,14 @@
set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/nanodbc)

if (NOT TARGET unixodbc)
message(FATAL_ERROR "Configuration error: unixodbc is not a target")
endif()

set (SRCS
${LIBRARY_DIR}/nanodbc/nanodbc.cpp
)

add_library(nanodbc ${SRCS})

target_link_libraries (nanodbc PUBLIC unixodbc)
target_include_directories (nanodbc SYSTEM PUBLIC ${LIBRARY_DIR}/)
63
contrib/openldap-cmake/linux_ppc64le/include/lber_types.h
Normal file
@ -0,0 +1,63 @@
/* include/lber_types.h.  Generated from lber_types.hin by configure. */
/* $OpenLDAP$ */
/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
 *
 * Copyright 1998-2020 The OpenLDAP Foundation.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted only as authorized by the OpenLDAP
 * Public License.
 *
 * A copy of this license is available in file LICENSE in the
 * top-level directory of the distribution or, alternatively, at
 * <http://www.OpenLDAP.org/license.html>.
 */

/*
 * LBER types
 */

#ifndef _LBER_TYPES_H
#define _LBER_TYPES_H

#include <ldap_cdefs.h>

LDAP_BEGIN_DECL

/* LBER boolean, enum, integers (32 bits or larger) */
#define LBER_INT_T int

/* LBER tags (32 bits or larger) */
#define LBER_TAG_T long

/* LBER socket descriptor */
#define LBER_SOCKET_T int

/* LBER lengths (32 bits or larger) */
#define LBER_LEN_T long

/* ------------------------------------------------------------ */

/* booleans, enumerations, and integers */
typedef LBER_INT_T ber_int_t;

/* signed and unsigned versions */
typedef signed LBER_INT_T ber_sint_t;
typedef unsigned LBER_INT_T ber_uint_t;

/* tags */
typedef unsigned LBER_TAG_T ber_tag_t;

/* "socket" descriptors */
typedef LBER_SOCKET_T ber_socket_t;

/* lengths */
typedef unsigned LBER_LEN_T ber_len_t;

/* signed lengths */
typedef signed LBER_LEN_T ber_slen_t;

LDAP_END_DECL

#endif /* _LBER_TYPES_H */
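/*
 * A sketch of what the macros above make the ber_* typedefs resolve to on
 * this platform (spelled out for illustration):
 */
#if 0 /* illustrative only */
typedef int ber_int_t;           /* LBER_INT_T          */
typedef unsigned long ber_tag_t; /* unsigned LBER_TAG_T */
typedef unsigned long ber_len_t; /* unsigned LBER_LEN_T */
#endif
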
74
contrib/openldap-cmake/linux_ppc64le/include/ldap_config.h
Normal file
@ -0,0 +1,74 @@
/* include/ldap_config.h.  Generated from ldap_config.hin by configure. */
/* $OpenLDAP$ */
/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
 *
 * Copyright 1998-2020 The OpenLDAP Foundation.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted only as authorized by the OpenLDAP
 * Public License.
 *
 * A copy of this license is available in file LICENSE in the
 * top-level directory of the distribution or, alternatively, at
 * <http://www.OpenLDAP.org/license.html>.
 */

/*
 * This file works in conjunction with the OpenLDAP configure system.
 * If you do not like the values below, adjust your configure options.
 */

#ifndef _LDAP_CONFIG_H
#define _LDAP_CONFIG_H

/* directory separator */
#ifndef LDAP_DIRSEP
#ifndef _WIN32
#define LDAP_DIRSEP "/"
#else
#define LDAP_DIRSEP "\\"
#endif
#endif

/* directory for temporary files */
#if defined(_WIN32)
# define LDAP_TMPDIR "C:\\." /* we don't have much of a choice */
#elif defined( _P_tmpdir )
# define LDAP_TMPDIR _P_tmpdir
#elif defined( P_tmpdir )
# define LDAP_TMPDIR P_tmpdir
#elif defined( _PATH_TMPDIR )
# define LDAP_TMPDIR _PATH_TMPDIR
#else
# define LDAP_TMPDIR LDAP_DIRSEP "tmp"
#endif

/* directories */
#ifndef LDAP_BINDIR
#define LDAP_BINDIR "/tmp/ldap-prefix/bin"
#endif
#ifndef LDAP_SBINDIR
#define LDAP_SBINDIR "/tmp/ldap-prefix/sbin"
#endif
#ifndef LDAP_DATADIR
#define LDAP_DATADIR "/tmp/ldap-prefix/share/openldap"
#endif
#ifndef LDAP_SYSCONFDIR
#define LDAP_SYSCONFDIR "/tmp/ldap-prefix/etc/openldap"
#endif
#ifndef LDAP_LIBEXECDIR
#define LDAP_LIBEXECDIR "/tmp/ldap-prefix/libexec"
#endif
#ifndef LDAP_MODULEDIR
#define LDAP_MODULEDIR "/tmp/ldap-prefix/libexec/openldap"
#endif
#ifndef LDAP_RUNDIR
#define LDAP_RUNDIR "/tmp/ldap-prefix/var"
#endif
#ifndef LDAP_LOCALEDIR
#define LDAP_LOCALEDIR ""
#endif


#endif /* _LDAP_CONFIG_H */
61
contrib/openldap-cmake/linux_ppc64le/include/ldap_features.h
Normal file
@ -0,0 +1,61 @@
/* include/ldap_features.h.  Generated from ldap_features.hin by configure. */
/* $OpenLDAP$ */
/* This work is part of OpenLDAP Software <http://www.openldap.org/>.
 *
 * Copyright 1998-2020 The OpenLDAP Foundation.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted only as authorized by the OpenLDAP
 * Public License.
 *
 * A copy of this license is available in file LICENSE in the
 * top-level directory of the distribution or, alternatively, at
 * <http://www.OpenLDAP.org/license.html>.
 */

/*
 * LDAP Features
 */

#ifndef _LDAP_FEATURES_H
#define _LDAP_FEATURES_H 1

/* OpenLDAP API version macros */
#define LDAP_VENDOR_VERSION 20501
#define LDAP_VENDOR_VERSION_MAJOR 2
#define LDAP_VENDOR_VERSION_MINOR 5
#define LDAP_VENDOR_VERSION_PATCH X

/*
** WORK IN PROGRESS!
**
** OpenLDAP reentrancy/thread-safeness should be dynamically
** checked using ldap_get_option().
**
** The -lldap implementation is not thread-safe.
**
** The -lldap_r implementation is:
**   LDAP_API_FEATURE_THREAD_SAFE (basic thread safety)
** but may also be:
**   LDAP_API_FEATURE_SESSION_THREAD_SAFE
**   LDAP_API_FEATURE_OPERATION_THREAD_SAFE
**
** The preprocessor flag LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE
** can be used to determine if -lldap_r is available at compile
** time.  You must define LDAP_THREAD_SAFE if and only if you
** link with -lldap_r.
**
** If you fail to define LDAP_THREAD_SAFE when linking with
** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap,
** provided header definitions and declarations may be incorrect.
**
*/

/* is -lldap_r available or not */
#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1

/* LDAP v2 Referrals */
/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */

#endif /* LDAP_FEATURES */
1169
contrib/openldap-cmake/linux_ppc64le/include/portable.h
Normal file
File diff suppressed because it is too large
2
contrib/poco
vendored
@ -1 +1 @@
Subproject commit c55b91f394efa9c238c33957682501681ef9b716
Subproject commit 83beecccb09eec0c9fd2669cacea03ede1d9f138
2
contrib/replxx
vendored
@ -1 +1 @@
Subproject commit cdb6e3f2ce4464225daf9c8beeae7db98d590bdc
Subproject commit 2b24f14594d7606792b92544bb112a6322ba34d7
2
contrib/simdjson
vendored
@ -1 +1 @@
Subproject commit 3190d66a49059092a1753dc35595923debfc1698
Subproject commit 95b4870e20be5f97d9dcf63b23b1c6f520c366c1
4
debian/changelog
vendored
@ -1,5 +1,5 @@
clickhouse (21.4.1.1) unstable; urgency=low
clickhouse (21.5.1.1) unstable; urgency=low

* Modified source code

-- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 06 Mar 2021 14:43:27 +0300
-- clickhouse-release <clickhouse-release@yandex-team.ru> Fri, 02 Apr 2021 18:34:26 +0300
2
debian/clickhouse-common-static.install
vendored
@ -1,4 +1,6 @@
usr/bin/clickhouse
usr/bin/clickhouse-odbc-bridge
usr/bin/clickhouse-library-bridge
usr/bin/clickhouse-extract-from-config
usr/share/bash-completion/completions
etc/security/limits.d/clickhouse.conf
@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.4.1.*
ARG version=21.5.1.*

RUN apt-get update \
&& apt-get install --yes --no-install-recommends \
@ -18,6 +18,7 @@ RUN apt-get update \
clickhouse-client=$version \
clickhouse-common-static=$version \
locales \
tzdata \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf \
&& apt-get clean
@ -138,7 +138,8 @@
"docker/test/stateless_unbundled",
"docker/test/stateless_pytest",
"docker/test/integration/base",
"docker/test/fuzzer"
"docker/test/fuzzer",
"docker/test/keeper-jepsen"
]
},
"docker/packager/unbundled": {
@ -159,5 +160,9 @@
"docker/test/sqlancer": {
"name": "yandex/clickhouse-sqlancer-test",
"dependent": []
},
"docker/test/keeper-jepsen": {
"name": "yandex/clickhouse-keeper-jepsen-test",
"dependent": []
}
}
@ -4,14 +4,22 @@ FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11

RUN apt-get update \
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
&& apt-get install \
apt-transport-https \
apt-utils \
ca-certificates \
dnsutils \
gnupg \
iputils-ping \
lsb-release \
wget \
--yes --no-install-recommends --verbose-versions \
&& export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
&& wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
&& echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
&& apt-key add /tmp/llvm-snapshot.gpg.key \
&& export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
&& echo "deb [trusted=yes] http://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \
&& echo "deb [trusted=yes] https://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \
/etc/apt/sources.list

# initial packages
@ -27,35 +35,38 @@ RUN apt-get update \
RUN apt-get update \
&& apt-get install \
bash \
cmake \
build-essential \
ccache \
curl \
gcc-9 \
g++-9 \
clang-10 \
clang-tidy-10 \
lld-10 \
llvm-10 \
llvm-10-dev \
clang-11 \
clang-tidy-10 \
clang-tidy-11 \
lld-11 \
llvm-11 \
llvm-11-dev \
cmake \
cmake \
curl \
g++-9 \
gcc-9 \
gdb \
git \
gperf \
gperf \
intel-opencl-icd \
libicu-dev \
libreadline-dev \
lld-10 \
lld-11 \
llvm-10 \
llvm-10-dev \
llvm-11 \
llvm-11-dev \
moreutils \
ninja-build \
gperf \
git \
opencl-headers \
ocl-icd-libopencl1 \
intel-opencl-icd \
tzdata \
gperf \
cmake \
gdb \
opencl-headers \
pigz \
pixz \
rename \
build-essential \
tzdata \
--yes --no-install-recommends

# This symlink required by gcc to find lld compiler
@ -103,4 +114,4 @@ RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update


COPY build.sh /
CMD ["/bin/bash", "/build.sh"]
CMD ["bash", "-c", "/build.sh 2>&1 | ts"]
@ -11,17 +11,28 @@ tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build/cmake/toolc
mkdir -p build/cmake/toolchain/freebsd-x86_64
tar xJf freebsd-11.3-toolchain.tar.xz -C build/cmake/toolchain/freebsd-x86_64 --strip-components=1

# Uncomment to debug ccache. Don't put ccache log in /output right away, or it
# will be confusingly packed into the "performance" package.
# export CCACHE_LOGFILE=/build/ccache.log
# export CCACHE_DEBUG=1

mkdir -p build/build_docker
cd build/build_docker
ccache --show-stats ||:
ccache --zero-stats ||:
ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
rm -f CMakeCache.txt
# Read cmake arguments into array (possibly empty)
read -ra CMAKE_FLAGS <<< "${CMAKE_FLAGS:-}"
cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" ..

ccache --show-config ||:
ccache --show-stats ||:
ccache --zero-stats ||:

# shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty.
ninja $NINJA_FLAGS clickhouse-bundle

ccache --show-config ||:
ccache --show-stats ||:

mv ./programs/clickhouse* /output
mv ./src/unit_tests_dbms /output ||: # may not exist for some binary builds
find . -name '*.so' -print -exec mv '{}' /output \;
@ -65,8 +76,21 @@ then
cp ../programs/server/config.xml /output/config
cp ../programs/server/users.xml /output/config
cp -r --dereference ../programs/server/config.d /output/config
tar -czvf "$COMBINED_OUTPUT.tgz" /output
tar -cv -I pigz -f "$COMBINED_OUTPUT.tgz" /output
rm -r /output/*
mv "$COMBINED_OUTPUT.tgz" /output
fi
ccache --show-stats ||:

if [ "${CCACHE_DEBUG:-}" == "1" ]
then
find . -name '*.ccache-*' -print0 \
| tar -c -I pixz -f /output/ccache-debug.txz --null -T -
fi

if [ -n "$CCACHE_LOGFILE" ]
then
# Compress the log as well, or else the CI will try to compress all log
# files in place, and will fail because this directory is not writable.
tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE"
fi
@ -34,31 +34,32 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
# Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
RUN apt-get update \
&& apt-get install \
gcc-9 \
g++-9 \
clang-11 \
clang-tidy-11 \
lld-11 \
llvm-11 \
llvm-11-dev \
alien \
clang-10 \
clang-11 \
clang-tidy-10 \
clang-tidy-11 \
cmake \
debhelper \
devscripts \
g++-9 \
gcc-9 \
gdb \
git \
gperf \
lld-10 \
lld-11 \
llvm-10 \
llvm-10-dev \
llvm-11 \
llvm-11-dev \
moreutils \
ninja-build \
perl \
pkg-config \
devscripts \
debhelper \
git \
tzdata \
gperf \
alien \
cmake \
gdb \
moreutils \
pigz \
pixz \
pkg-config \
tzdata \
--yes --no-install-recommends

# NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable.
@ -2,8 +2,14 @@

set -x -e

# Uncomment to debug ccache.
# export CCACHE_LOGFILE=/build/ccache.log
# export CCACHE_DEBUG=1

ccache --show-config ||:
ccache --show-stats ||:
ccache --zero-stats ||:

read -ra ALIEN_PKGS <<< "${ALIEN_PKGS:-}"
build/release --no-pbuilder "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S'
mv /*.deb /output
@ -22,5 +28,19 @@ then
mv /build/obj-*/src/unit_tests_dbms /output/binary
fi
fi

ccache --show-config ||:
ccache --show-stats ||:
ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:

if [ "${CCACHE_DEBUG:-}" == "1" ]
then
find /build -name '*.ccache-*' -print0 \
| tar -c -I pixz -f /output/ccache-debug.txz --null -T -
fi

if [ -n "$CCACHE_LOGFILE" ]
then
# Compress the log as well, or else the CI will try to compress all log
# files in place, and will fail because this directory is not writable.
tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE"
fi

@ -13,4 +13,3 @@ mv /*.rpm /output ||: # if exists
mv /*.tgz /output ||: # if exists

ccache --show-stats ||:
ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
@ -1,9 +1,24 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.4.1.*
ARG version=21.5.1.*
ARG gosu_ver=1.10

# set non-empty deb_location_url url to create a docker image
# from debs created by CI build, for example:
# docker build . --network host --build-arg version="21.4.1.6282" --build-arg deb_location_url="https://clickhouse-builds.s3.yandex.net/21852/069cfbff388b3d478d1a16dc7060b48073f5d522/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_deb/" -t filimonovq/clickhouse-server:pr21852
ARG deb_location_url=""

# set non-empty single_binary_location_url to create docker image
# from a single binary url (useful for non-standard builds - with sanitizers, for arm64).
# for example (run on aarch64 server):
# docker build . --network host --build-arg single_binary_location_url="https://builds.clickhouse.tech/master/aarch64/clickhouse" -t altinity/clickhouse-server:master-testing-arm
# note: clickhouse-odbc-bridge is not supported there.
ARG single_binary_location_url=""

# see https://github.com/moby/moby/issues/4032#issuecomment-192327844
ARG DEBIAN_FRONTEND=noninteractive

# user/group precreated explicitly with fixed uid/gid on purpose.
# It is especially important for rootless containers: in that case entrypoint
# can't do chown and owners of mounted volumes should be configured externally.
@ -19,19 +34,37 @@ RUN groupadd -r clickhouse --gid=101 \
ca-certificates \
dirmngr \
gnupg \
locales \
wget \
tzdata \
&& mkdir -p /etc/apt/sources.list.d \
&& apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4 \
&& echo $repository > /etc/apt/sources.list.d/clickhouse.list \
&& apt-get update \
&& env DEBIAN_FRONTEND=noninteractive \
apt-get --yes -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" upgrade \
&& env DEBIAN_FRONTEND=noninteractive \
apt-get install --allow-unauthenticated --yes --no-install-recommends \
clickhouse-common-static=$version \
clickhouse-client=$version \
clickhouse-server=$version \
locales \
wget \
&& if [ -n "$deb_location_url" ]; then \
echo "installing from custom url with deb packages: $deb_location_url" \
rm -rf /tmp/clickhouse_debs \
&& mkdir -p /tmp/clickhouse_debs \
&& wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-common-static_${version}_amd64.deb" -P /tmp/clickhouse_debs \
&& wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-client_${version}_all.deb" -P /tmp/clickhouse_debs \
&& wget --progress=bar:force:noscroll "${deb_location_url}/clickhouse-server_${version}_all.deb" -P /tmp/clickhouse_debs \
&& dpkg -i /tmp/clickhouse_debs/*.deb ; \
elif [ -n "$single_binary_location_url" ]; then \
echo "installing from single binary url: $single_binary_location_url" \
&& rm -rf /tmp/clickhouse_binary \
&& mkdir -p /tmp/clickhouse_binary \
&& wget --progress=bar:force:noscroll "$single_binary_location_url" -O /tmp/clickhouse_binary/clickhouse \
&& chmod +x /tmp/clickhouse_binary/clickhouse \
&& /tmp/clickhouse_binary/clickhouse install --user "clickhouse" --group "clickhouse" ; \
else \
echo "installing from repository: $repository" \
&& apt-get update \
&& apt-get --yes -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" upgrade \
&& apt-get install --allow-unauthenticated --yes --no-install-recommends \
clickhouse-common-static=$version \
clickhouse-client=$version \
clickhouse-server=$version ; \
fi \
&& clickhouse-local -q 'SELECT * FROM system.build_options' \
&& rm -rf \
/var/lib/apt/lists/* \
/var/cache/debconf \
@ -21,7 +21,9 @@ RUN addgroup -S -g 101 clickhouse \
&& chown clickhouse:clickhouse /var/lib/clickhouse \
&& chown root:clickhouse /var/log/clickhouse-server \
&& chmod +x /entrypoint.sh \
&& apk add --no-cache su-exec bash \
&& apk add --no-cache su-exec bash tzdata \
&& cp /usr/share/zoneinfo/UTC /etc/localtime \
&& echo "UTC" > /etc/timezone \
&& chmod ugo+Xrw -R /var/lib/clickhouse /var/log/clickhouse-server /etc/clickhouse-server /etc/clickhouse-client

# we need to allow "others" access to clickhouse folder, because docker container
@ -38,17 +38,16 @@ if ! $gosu test -f "$CLICKHOUSE_CONFIG" -a -r "$CLICKHOUSE_CONFIG"; then
exit 1
fi

# port is needed to check if clickhouse-server is ready for connections
HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port)"

# get CH directories locations
DATA_DIR="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=path || true)"
TMP_DIR="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=tmp_path || true)"
USER_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=user_files_path || true)"
LOG_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=logger.log || true)"
LOG_DIR="$(dirname "$LOG_PATH" || true)"
LOG_DIR=""
if [ -n "$LOG_PATH" ]; then LOG_DIR="$(dirname "$LOG_PATH")"; fi
ERROR_LOG_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=logger.errorlog || true)"
ERROR_LOG_DIR="$(dirname "$ERROR_LOG_PATH" || true)"
ERROR_LOG_DIR=""
if [ -n "$ERROR_LOG_PATH" ]; then ERROR_LOG_DIR="$(dirname "$ERROR_LOG_PATH")"; fi
FORMAT_SCHEMA_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=format_schema_path || true)"

CLICKHOUSE_USER="${CLICKHOUSE_USER:-default}"
@ -106,6 +105,9 @@ EOT
fi

if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then
# port is needed to check if clickhouse-server is ready for connections
HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port)"

# Listen only on localhost until the initialization is done
$gosu /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" -- --listen_host=127.0.0.1 &
pid="$!"
@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.4.1.*
ARG version=21.5.1.*

RUN apt-get update && \
apt-get install -y apt-transport-https dirmngr && \
@ -1,7 +1,7 @@
# docker build -t yandex/clickhouse-fasttest .
FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=11

RUN apt-get update \
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
@ -43,20 +43,20 @@ RUN apt-get update \
clang-tidy-${LLVM_VERSION} \
cmake \
curl \
lsof \
expect \
fakeroot \
git \
gdb \
git \
gperf \
lld-${LLVM_VERSION} \
llvm-${LLVM_VERSION} \
lsof \
moreutils \
ninja-build \
psmisc \
python3 \
python3-pip \
python3-lxml \
python3-pip \
python3-requests \
python3-termcolor \
rename \
@ -8,6 +8,9 @@ trap 'kill $(jobs -pr) ||:' EXIT
# that we can run the "everything else" stage from the cloned source.
stage=${stage:-}

# Compiler version, normally set by Dockerfile
export LLVM_VERSION=${LLVM_VERSION:-11}

# A variable to pass additional flags to CMake.
# Here we explicitly default it to nothing so that bash doesn't complain about
# it being undefined. Also read it as array so that we can pass an empty list
@ -70,7 +73,7 @@ function start_server
--path "$FASTTEST_DATA"
--user_files_path "$FASTTEST_DATA/user_files"
--top_level_domains_path "$FASTTEST_DATA/top_level_domains"
--test_keeper_server.log_storage_path "$FASTTEST_DATA/coordination"
--keeper_server.log_storage_path "$FASTTEST_DATA/coordination"
)
clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" &
server_pid=$!
@ -124,22 +127,26 @@ continue

function clone_root
{
git clone https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt"
git clone --depth 1 https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt"

(
cd "$FASTTEST_SOURCE"
if [ "$PULL_REQUEST_NUMBER" != "0" ]; then
if git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then
if git fetch --depth 1 origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then
git checkout FETCH_HEAD
echo 'Clonned merge head'
echo "Checked out pull/$PULL_REQUEST_NUMBER/merge ($(git rev-parse FETCH_HEAD))"
else
git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/head"
git fetch --depth 1 origin "+refs/pull/$PULL_REQUEST_NUMBER/head"
git checkout "$COMMIT_SHA"
echo 'Checked out to commit'
echo "Checked out nominal SHA $COMMIT_SHA for PR $PULL_REQUEST_NUMBER"
fi
else
if [ -v COMMIT_SHA ]; then
git fetch --depth 1 origin "$COMMIT_SHA"
git checkout "$COMMIT_SHA"
echo "Checked out nominal SHA $COMMIT_SHA for master"
else
echo "Using default repository head $(git rev-parse HEAD)"
fi
fi
)
@ -181,7 +188,7 @@ function clone_submodules
)

git submodule sync
git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
git submodule update --depth 1 --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
git submodule foreach git reset --hard
git submodule foreach git checkout @ -f
git submodule foreach git clean -xfd
@ -215,7 +222,7 @@ function run_cmake

(
cd "$FASTTEST_BUILD"
cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER=clang++-10 -DCMAKE_C_COMPILER=clang-10 "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER="clang++-${LLVM_VERSION}" -DCMAKE_C_COMPILER="clang-${LLVM_VERSION}" "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
)
}

@ -223,7 +230,7 @@ function build
{
(
cd "$FASTTEST_BUILD"
time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
time ninja clickhouse-bundle 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
fi
@ -292,6 +299,8 @@ function run_tests
01318_decrypt # Depends on OpenSSL
01663_aes_msan # Depends on OpenSSL
01667_aes_args_check # Depends on OpenSSL
01776_decrypt_aead_size_check # Depends on OpenSSL
01811_filter_by_null # Depends on OpenSSL
01281_unsucceeded_insert_select_queries_counter
01292_create_user
01294_lazy_database_concurrent
@ -357,6 +366,9 @@ function run_tests

# JSON functions
01666_blns

# Depends on AWS
01801_s3_cluster
)

(time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
@ -419,7 +431,7 @@ case "$stage" in
# See the compatibility hacks in `clone_root` stage above. Remove at the same time,
# after Nov 1, 2020.
cd "$FASTTEST_WORKSPACE"
clone_submodules | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/submodule_log.txt"
clone_submodules 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/submodule_log.txt"
;&
"run_cmake")
run_cmake
@ -430,7 +442,7 @@ case "$stage" in
"configure")
# The `install_log.txt` is also needed for compatibility with old CI task --
# if there is no log, it will decide that build failed.
configure | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/install_log.txt"
configure 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/install_log.txt"
;&
"run_tests")
run_tests
@ -4,7 +4,9 @@
set -eux
set -o pipefail
trap "exit" INT TERM
trap 'kill $(jobs -pr) ||:' EXIT
# The watchdog is in the separate process group, so we have to kill it separately
# if the script terminates earlier.
trap 'kill $(jobs -pr) ${watchdog_pid:-} ||:' EXIT

stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
@ -14,35 +16,28 @@ BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-11_debug_none_bundled_unsplitted

function clone
{
(
# The download() function is dependent on CI binaries anyway, so we can take
# the repo from the CI as well. For local runs, start directly from the "fuzz"
# stage.
rm -rf ch ||:
mkdir ch
cd ch

git init
git remote add origin https://github.com/ClickHouse/ClickHouse

# Network is unreliable. GitHub neither.
for _ in {1..100}; do git fetch --depth=100 origin "$SHA_TO_TEST" && break; sleep 1; done
# Used to obtain the list of modified or added tests
for _ in {1..100}; do git fetch --depth=100 origin master && break; sleep 1; done

# If not master, try to fetch pull/.../{head,merge}
if [ "$PR_TO_TEST" != "0" ]
then
for _ in {1..100}; do git fetch --depth=100 origin "refs/pull/$PR_TO_TEST/*:refs/heads/pull/$PR_TO_TEST/*" && break; sleep 1; done
fi

git checkout "$SHA_TO_TEST"
)
mkdir ch ||:
wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/clickhouse_no_subs.tar.gz"
tar -C ch --strip-components=1 -xf clickhouse_no_subs.tar.gz
ls -lath ||:
}

function download
{
wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse"
wget -nv -nd -c "https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse" &
wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/ci-changed-files.txt" &
wait

chmod +x clickhouse
ln -s ./clickhouse ./clickhouse-server
ln -s ./clickhouse ./clickhouse-client

# clickhouse-server is in the current dir
export PATH="$PWD:$PATH"
}

function configure
@ -74,25 +69,38 @@ function watchdog
killall -9 clickhouse-client ||:
}

function filter_exists
{
local path
for path in "$@"; do
if [ -e "$path" ]; then
echo "$path"
else
echo "'$path' does not exist" >&2
fi
done
}

function fuzz
{
# Obtain the list of newly added tests. They will be fuzzed in more extreme way than other tests.
cd ch
NEW_TESTS=$(git diff --name-only "$(git merge-base origin/master "$SHA_TO_TEST"~)" "$SHA_TO_TEST" | grep -P 'tests/queries/0_stateless/.*\.sql' | sed -r -e 's!^!ch/!' | sort -R)
cd ..
# Don't overwrite the NEW_TESTS_OPT so that it can be set from the environment.
NEW_TESTS="$(grep -P 'tests/queries/0_stateless/.*\.sql' ci-changed-files.txt | sed -r -e 's!^!ch/!' | sort -R)"
# ci-changed-files.txt contains also files that have been deleted/renamed, filter them out.
NEW_TESTS="$(filter_exists $NEW_TESTS)"
if [[ -n "$NEW_TESTS" ]]
then
NEW_TESTS_OPT="--interleave-queries-file ${NEW_TESTS}"
NEW_TESTS_OPT="${NEW_TESTS_OPT:---interleave-queries-file ${NEW_TESTS}}"
else
NEW_TESTS_OPT=""
NEW_TESTS_OPT="${NEW_TESTS_OPT:-}"
fi

./clickhouse-server --config-file db/config.xml -- --path db 2>&1 | tail -100000 > server.log &
clickhouse-server --config-file db/config.xml -- --path db 2>&1 | tail -100000 > server.log &

server_pid=$!
kill -0 $server_pid
while ! ./clickhouse-client --query "select 1" && kill -0 $server_pid ; do echo . ; sleep 1 ; done
./clickhouse-client --query "select 1"
while ! clickhouse-client --query "select 1" && kill -0 $server_pid ; do echo . ; sleep 1 ; done
clickhouse-client --query "select 1"
kill -0 $server_pid
echo Server started

@ -111,14 +119,14 @@ continue
# SC2012: Use find instead of ls to better handle non-alphanumeric filenames. They are all alphanumeric.
# SC2046: Quote this to prevent word splitting. Actually I need word splitting.
# shellcheck disable=SC2012,SC2046
./clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \
clickhouse-client --query-fuzzer-runs=1000 --queries-file $(ls -1 ch/tests/queries/0_stateless/*.sql | sort -R) $NEW_TESTS_OPT \
> >(tail -n 100000 > fuzzer.log) \
2>&1 \
|| fuzzer_exit_code=$?

echo "Fuzzer exit code is $fuzzer_exit_code"

./clickhouse-client --query "select elapsed, query from system.processes" ||:
clickhouse-client --query "select elapsed, query from system.processes" ||:
killall clickhouse-server ||:
for _ in {1..10}
do
@ -190,7 +198,7 @@ case "$stage" in
# Lost connection to the server. This probably means that the server died
# with abort.
echo "failure" > status.txt
if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
then
echo "Lost connection to server. See the logs." > description.txt
fi

@ -19,7 +19,8 @@ RUN apt-get update \
tar \
krb5-user \
iproute2 \
lsof
lsof \
g++
RUN rm -rf \
/var/lib/apt/lists/* \
/var/cache/debconf \
Some files were not shown because too many files have changed in this diff